00:00:00.001 Started by upstream project "autotest-per-patch" build number 124197 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.039 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:07.039 The recommended git tool is: git 00:00:07.039 using credential 00000000-0000-0000-0000-000000000002 00:00:07.041 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:07.054 Fetching changes from the remote Git repository 00:00:07.057 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:07.069 Using shallow fetch with depth 1 00:00:07.069 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:07.069 > git --version # timeout=10 00:00:07.080 > git --version # 'git version 2.39.2' 00:00:07.080 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:07.091 Setting http proxy: proxy-dmz.intel.com:911 00:00:07.091 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:15.684 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:15.696 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:15.707 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:15.707 > git config core.sparsecheckout # timeout=10 00:00:15.717 > git read-tree -mu HEAD # timeout=10 00:00:15.733 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:15.752 Commit message: "pool: fixes for VisualBuild class" 00:00:15.752 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:15.833 [Pipeline] Start of Pipeline 00:00:15.855 [Pipeline] library 00:00:15.856 Loading library shm_lib@master 00:00:15.857 Library shm_lib@master is cached. Copying from home. 00:00:15.877 [Pipeline] node 00:00:30.879 Still waiting to schedule task 00:00:30.880 Waiting for next available executor on ‘DiskNvme&&NetCVL’ 00:08:36.742 Running on WFP20 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:08:36.744 [Pipeline] { 00:08:36.762 [Pipeline] catchError 00:08:36.766 [Pipeline] { 00:08:36.785 [Pipeline] wrap 00:08:36.797 [Pipeline] { 00:08:36.803 [Pipeline] stage 00:08:36.805 [Pipeline] { (Prologue) 00:08:37.011 [Pipeline] sh 00:08:37.292 + logger -p user.info -t JENKINS-CI 00:08:37.311 [Pipeline] echo 00:08:37.312 Node: WFP20 00:08:37.320 [Pipeline] sh 00:08:37.616 [Pipeline] setCustomBuildProperty 00:08:37.630 [Pipeline] echo 00:08:37.632 Cleanup processes 00:08:37.637 [Pipeline] sh 00:08:37.921 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:37.922 3641076 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:37.937 [Pipeline] sh 00:08:38.224 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:38.224 ++ grep -v 'sudo pgrep' 00:08:38.224 ++ awk '{print $1}' 00:08:38.224 + sudo kill -9 00:08:38.224 + true 00:08:38.239 [Pipeline] cleanWs 00:08:38.250 [WS-CLEANUP] Deleting project workspace... 00:08:38.250 [WS-CLEANUP] Deferred wipeout is used... 
00:08:38.257 [WS-CLEANUP] done 00:08:38.263 [Pipeline] setCustomBuildProperty 00:08:38.281 [Pipeline] sh 00:08:38.564 + sudo git config --global --replace-all safe.directory '*' 00:08:38.636 [Pipeline] nodesByLabel 00:08:38.638 Found a total of 2 nodes with the 'sorcerer' label 00:08:38.647 [Pipeline] httpRequest 00:08:38.652 HttpMethod: GET 00:08:38.652 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:08:38.654 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:08:38.681 Response Code: HTTP/1.1 200 OK 00:08:38.682 Success: Status code 200 is in the accepted range: 200,404 00:08:38.682 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:08:38.828 [Pipeline] sh 00:08:39.111 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:08:39.128 [Pipeline] httpRequest 00:08:39.133 HttpMethod: GET 00:08:39.134 URL: http://10.211.164.101/packages/spdk_1e8a0c991f0e61e22e668387df823eb65422beb5.tar.gz 00:08:39.135 Sending request to url: http://10.211.164.101/packages/spdk_1e8a0c991f0e61e22e668387df823eb65422beb5.tar.gz 00:08:39.159 Response Code: HTTP/1.1 200 OK 00:08:39.160 Success: Status code 200 is in the accepted range: 200,404 00:08:39.160 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1e8a0c991f0e61e22e668387df823eb65422beb5.tar.gz 00:08:41.344 [Pipeline] sh 00:08:41.629 + tar --no-same-owner -xf spdk_1e8a0c991f0e61e22e668387df823eb65422beb5.tar.gz 00:08:44.932 [Pipeline] sh 00:08:45.215 + git -C spdk log --oneline -n5 00:08:45.215 1e8a0c991 nvme: Get NVM Identify Namespace Data for Extended LBA Format 00:08:45.215 493b11851 nvme: Use Host Behavior Support Feature to enable LBA Format Extension 00:08:45.215 e2612f201 nvme: Factor out getting ZNS Identify Namespace Data 00:08:45.215 93e13a7a6 nvme_spec: Add IOCS Identify Namespace Data for NVM command set 00:08:45.215 e55c9a812 vbdev_error: decrement error_num atomically 00:08:45.227 [Pipeline] } 00:08:45.246 [Pipeline] // stage 00:08:45.256 [Pipeline] stage 00:08:45.259 [Pipeline] { (Prepare) 00:08:45.279 [Pipeline] writeFile 00:08:45.296 [Pipeline] sh 00:08:45.578 + logger -p user.info -t JENKINS-CI 00:08:45.592 [Pipeline] sh 00:08:45.875 + logger -p user.info -t JENKINS-CI 00:08:45.891 [Pipeline] sh 00:08:46.174 + cat autorun-spdk.conf 00:08:46.174 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:46.174 SPDK_TEST_NVMF=1 00:08:46.174 SPDK_TEST_NVME_CLI=1 00:08:46.174 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:46.174 SPDK_TEST_NVMF_NICS=e810 00:08:46.174 SPDK_TEST_VFIOUSER=1 00:08:46.174 SPDK_RUN_UBSAN=1 00:08:46.174 NET_TYPE=phy 00:08:46.181 RUN_NIGHTLY=0 00:08:46.188 [Pipeline] readFile 00:08:46.217 [Pipeline] withEnv 00:08:46.219 [Pipeline] { 00:08:46.236 [Pipeline] sh 00:08:46.522 + set -ex 00:08:46.522 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:08:46.522 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:08:46.522 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:46.522 ++ SPDK_TEST_NVMF=1 00:08:46.522 ++ SPDK_TEST_NVME_CLI=1 00:08:46.522 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:46.522 ++ SPDK_TEST_NVMF_NICS=e810 00:08:46.522 ++ SPDK_TEST_VFIOUSER=1 00:08:46.522 ++ SPDK_RUN_UBSAN=1 00:08:46.522 ++ NET_TYPE=phy 00:08:46.522 ++ RUN_NIGHTLY=0 00:08:46.522 + case $SPDK_TEST_NVMF_NICS in 00:08:46.522 + DRIVERS=ice 00:08:46.522 + [[ tcp == \r\d\m\a ]] 00:08:46.522 + [[ -n ice ]] 00:08:46.522 + sudo rmmod 
mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:08:46.522 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:08:53.094 rmmod: ERROR: Module irdma is not currently loaded 00:08:53.094 rmmod: ERROR: Module i40iw is not currently loaded 00:08:53.094 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:08:53.094 + true 00:08:53.094 + for D in $DRIVERS 00:08:53.094 + sudo modprobe ice 00:08:53.094 + exit 0 00:08:53.103 [Pipeline] } 00:08:53.122 [Pipeline] // withEnv 00:08:53.128 [Pipeline] } 00:08:53.144 [Pipeline] // stage 00:08:53.154 [Pipeline] catchError 00:08:53.156 [Pipeline] { 00:08:53.171 [Pipeline] timeout 00:08:53.171 Timeout set to expire in 50 min 00:08:53.173 [Pipeline] { 00:08:53.186 [Pipeline] stage 00:08:53.188 [Pipeline] { (Tests) 00:08:53.200 [Pipeline] sh 00:08:53.479 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:08:53.479 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:08:53.479 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:08:53.479 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:08:53.479 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:53.479 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:08:53.479 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:08:53.479 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:08:53.479 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:08:53.479 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:08:53.479 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:08:53.479 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:08:53.479 + source /etc/os-release 00:08:53.479 ++ NAME='Fedora Linux' 00:08:53.479 ++ VERSION='38 (Cloud Edition)' 00:08:53.479 ++ ID=fedora 00:08:53.479 ++ VERSION_ID=38 00:08:53.479 ++ VERSION_CODENAME= 00:08:53.479 ++ PLATFORM_ID=platform:f38 00:08:53.479 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:08:53.479 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:53.479 ++ LOGO=fedora-logo-icon 00:08:53.479 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:08:53.479 ++ HOME_URL=https://fedoraproject.org/ 00:08:53.480 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:08:53.480 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:53.480 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:53.480 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:53.480 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:08:53.480 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:53.480 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:08:53.480 ++ SUPPORT_END=2024-05-14 00:08:53.480 ++ VARIANT='Cloud Edition' 00:08:53.480 ++ VARIANT_ID=cloud 00:08:53.480 + uname -a 00:08:53.480 Linux spdk-wfp-20 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:08:53.480 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:57.675 Hugepages 00:08:57.675 node hugesize free / total 00:08:57.675 node0 1048576kB 0 / 0 00:08:57.675 node0 2048kB 0 / 0 00:08:57.675 node1 1048576kB 0 / 0 00:08:57.675 node1 2048kB 0 / 0 00:08:57.675 00:08:57.675 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:57.675 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:08:57.675 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:08:57.675 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:08:57.675 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:08:57.675 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:08:57.675 I/OAT 0000:00:04.5 8086 
2021 0 ioatdma - - 00:08:57.675 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:08:57.675 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:08:57.675 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:08:57.675 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:08:57.675 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:08:57.675 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:08:57.675 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:08:57.675 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:08:57.675 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:08:57.675 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:08:57.675 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:08:57.675 + rm -f /tmp/spdk-ld-path 00:08:57.675 + source autorun-spdk.conf 00:08:57.675 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:57.675 ++ SPDK_TEST_NVMF=1 00:08:57.675 ++ SPDK_TEST_NVME_CLI=1 00:08:57.675 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:57.675 ++ SPDK_TEST_NVMF_NICS=e810 00:08:57.675 ++ SPDK_TEST_VFIOUSER=1 00:08:57.675 ++ SPDK_RUN_UBSAN=1 00:08:57.675 ++ NET_TYPE=phy 00:08:57.675 ++ RUN_NIGHTLY=0 00:08:57.675 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:57.675 + [[ -n '' ]] 00:08:57.675 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:57.675 + for M in /var/spdk/build-*-manifest.txt 00:08:57.675 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:57.675 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:08:57.675 + for M in /var/spdk/build-*-manifest.txt 00:08:57.675 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:57.675 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:08:57.675 ++ uname 00:08:57.675 + [[ Linux == \L\i\n\u\x ]] 00:08:57.675 + sudo dmesg -T 00:08:57.675 + sudo dmesg --clear 00:08:57.675 + dmesg_pid=3642145 00:08:57.675 + [[ Fedora Linux == FreeBSD ]] 00:08:57.675 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:57.675 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:57.675 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:57.675 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:08:57.675 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:08:57.675 + [[ -x /usr/src/fio-static/fio ]] 00:08:57.675 + sudo dmesg -Tw 00:08:57.675 + export FIO_BIN=/usr/src/fio-static/fio 00:08:57.675 + FIO_BIN=/usr/src/fio-static/fio 00:08:57.675 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:57.675 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:08:57.675 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:57.675 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:57.675 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:57.675 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:57.675 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:57.676 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:57.676 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:08:57.676 Test configuration: 00:08:57.676 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:57.676 SPDK_TEST_NVMF=1 00:08:57.676 SPDK_TEST_NVME_CLI=1 00:08:57.676 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:57.676 SPDK_TEST_NVMF_NICS=e810 00:08:57.676 SPDK_TEST_VFIOUSER=1 00:08:57.676 SPDK_RUN_UBSAN=1 00:08:57.676 NET_TYPE=phy 00:08:57.676 RUN_NIGHTLY=0 11:17:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.676 11:17:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:57.676 11:17:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.676 11:17:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.676 11:17:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.676 11:17:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.676 11:17:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.676 11:17:22 -- paths/export.sh@5 -- $ export PATH 00:08:57.676 11:17:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.676 11:17:22 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:08:57.676 11:17:22 -- common/autobuild_common.sh@437 -- $ date +%s 00:08:57.676 11:17:22 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718011042.XXXXXX 00:08:57.676 11:17:22 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718011042.fjnaiG 00:08:57.676 11:17:22 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:08:57.676 11:17:22 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:08:57.676 11:17:22 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:08:57.676 11:17:22 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:08:57.676 11:17:22 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:08:57.676 11:17:22 -- common/autobuild_common.sh@453 -- $ get_config_params 00:08:57.676 11:17:22 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:08:57.676 11:17:22 -- common/autotest_common.sh@10 -- $ set +x 00:08:57.676 11:17:22 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:08:57.676 11:17:22 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:08:57.676 11:17:22 -- pm/common@17 -- $ local monitor 00:08:57.676 11:17:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:57.676 11:17:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:57.676 11:17:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:57.676 11:17:22 -- pm/common@21 -- $ date +%s 00:08:57.676 11:17:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:57.676 11:17:22 -- pm/common@21 -- $ date +%s 00:08:57.676 11:17:22 -- pm/common@25 -- $ sleep 1 00:08:57.676 11:17:22 -- pm/common@21 -- $ date +%s 00:08:57.676 11:17:22 -- pm/common@21 -- $ date +%s 00:08:57.676 11:17:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011042 00:08:57.676 11:17:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011042 00:08:57.676 11:17:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011042 00:08:57.676 11:17:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718011042 00:08:57.676 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011042_collect-cpu-load.pm.log 00:08:57.676 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011042_collect-vmstat.pm.log 00:08:57.676 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011042_collect-cpu-temp.pm.log 00:08:57.676 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718011042_collect-bmc-pm.bmc.pm.log 00:08:58.613 11:17:23 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:08:58.613 11:17:23 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:08:58.613 11:17:23 -- spdk/autobuild.sh@12 -- $ umask 022
00:08:58.613 11:17:23 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:08:58.613 11:17:23 -- spdk/autobuild.sh@16 -- $ date -u
00:08:58.613 Mon Jun 10 09:17:23 AM UTC 2024
00:08:58.613 11:17:23 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:08:58.613 v24.09-pre-57-g1e8a0c991
00:08:58.613 11:17:23 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:08:58.613 11:17:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:08:58.613 11:17:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:08:58.613 11:17:23 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:08:58.613 11:17:23 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:08:58.613 11:17:23 -- common/autotest_common.sh@10 -- $ set +x
00:08:58.613 ************************************
00:08:58.613 START TEST ubsan
00:08:58.613 ************************************
00:08:58.613 11:17:23 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:08:58.613 using ubsan
00:08:58.613
00:08:58.613 real 0m0.001s
00:08:58.613 user 0m0.000s
00:08:58.613 sys 0m0.000s
00:08:58.613 11:17:23 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:08:58.613 11:17:23 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:08:58.613 ************************************
00:08:58.613 END TEST ubsan
00:08:58.613 ************************************
00:08:58.613 11:17:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:08:58.613 11:17:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:08:58.613 11:17:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:08:58.613 11:17:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:08:58.613 11:17:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:08:58.613 11:17:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:08:58.613 11:17:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:08:58.613 11:17:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:08:58.613 11:17:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:08:58.872 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:08:58.872 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:08:59.132 Using 'verbs' RDMA provider
00:09:14.952 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:09:29.841 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:09:29.841 Creating mk/config.mk...done.
00:09:29.841 Creating mk/cc.flags.mk...done.
00:09:29.841 Type 'make' to build.
00:09:29.841 11:17:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:09:29.841 11:17:53 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:09:29.841 11:17:53 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:09:29.841 11:17:53 -- common/autotest_common.sh@10 -- $ set +x
00:09:29.841 ************************************
00:09:29.841 START TEST make
00:09:29.841 ************************************
00:09:29.841 11:17:53 make -- common/autotest_common.sh@1124 -- $ make -j112
00:09:29.841 make[1]: Nothing to be done for 'all'.
00:09:30.406 The Meson build system
00:09:30.406 Version: 1.3.1
00:09:30.406 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:09:30.406 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:09:30.406 Build type: native build
00:09:30.406 Project name: libvfio-user
00:09:30.406 Project version: 0.0.1
00:09:30.406 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:09:30.406 C linker for the host machine: cc ld.bfd 2.39-16
00:09:30.406 Host machine cpu family: x86_64
00:09:30.406 Host machine cpu: x86_64
00:09:30.406 Run-time dependency threads found: YES
00:09:30.406 Library dl found: YES
00:09:30.406 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:09:30.406 Run-time dependency json-c found: YES 0.17
00:09:30.406 Run-time dependency cmocka found: YES 1.1.7
00:09:30.406 Program pytest-3 found: NO
00:09:30.406 Program flake8 found: NO
00:09:30.406 Program misspell-fixer found: NO
00:09:30.406 Program restructuredtext-lint found: NO
00:09:30.406 Program valgrind found: YES (/usr/bin/valgrind)
00:09:30.406 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:09:30.406 Compiler for C supports arguments -Wmissing-declarations: YES
00:09:30.406 Compiler for C supports arguments -Wwrite-strings: YES
00:09:30.406 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:09:30.406 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:09:30.406 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:09:30.406 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:09:30.406 Build targets in project: 8 00:09:30.406 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:09:30.406 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:09:30.406 00:09:30.406 libvfio-user 0.0.1 00:09:30.406 00:09:30.406 User defined options 00:09:30.406 buildtype : debug 00:09:30.406 default_library: shared 00:09:30.406 libdir : /usr/local/lib 00:09:30.406 00:09:30.406 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:30.664 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:09:30.921 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:09:30.921 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:09:30.921 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:09:30.921 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:09:30.921 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:09:30.921 [6/37] Compiling C object samples/null.p/null.c.o 00:09:30.921 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:09:30.921 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:09:30.921 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:09:30.921 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:09:30.921 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:09:30.921 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:09:30.921 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:09:30.921 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:09:30.921 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:09:30.921 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:09:30.921 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:09:30.921 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:09:30.921 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:09:30.921 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:09:30.921 [21/37] Compiling C object samples/client.p/client.c.o 00:09:30.921 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:09:30.921 [23/37] Compiling C object samples/server.p/server.c.o 00:09:30.921 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:09:30.921 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:09:30.921 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:09:30.921 [27/37] Linking target samples/client 00:09:30.921 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:09:30.921 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:09:31.179 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:09:31.179 [31/37] Linking target test/unit_tests 00:09:31.179 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:09:31.179 [33/37] Linking target samples/shadow_ioeventfd_server 00:09:31.179 [34/37] Linking target samples/server 00:09:31.179 [35/37] Linking target samples/lspci 00:09:31.179 [36/37] Linking target samples/null 00:09:31.179 [37/37] Linking target samples/gpio-pci-idio-16 00:09:31.179 INFO: autodetecting backend as ninja 00:09:31.179 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:09:31.179 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:09:31.746 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:09:31.746 ninja: no work to do. 00:09:38.347 The Meson build system 00:09:38.347 Version: 1.3.1 00:09:38.347 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:09:38.347 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:09:38.347 Build type: native build 00:09:38.347 Program cat found: YES (/usr/bin/cat) 00:09:38.347 Project name: DPDK 00:09:38.347 Project version: 24.03.0 00:09:38.347 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:09:38.347 C linker for the host machine: cc ld.bfd 2.39-16 00:09:38.347 Host machine cpu family: x86_64 00:09:38.347 Host machine cpu: x86_64 00:09:38.347 Message: ## Building in Developer Mode ## 00:09:38.347 Program pkg-config found: YES (/usr/bin/pkg-config) 00:09:38.347 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:09:38.347 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:09:38.347 Program python3 found: YES (/usr/bin/python3) 00:09:38.347 Program cat found: YES (/usr/bin/cat) 00:09:38.347 Compiler for C supports arguments -march=native: YES 00:09:38.347 Checking for size of "void *" : 8 00:09:38.347 Checking for size of "void *" : 8 (cached) 00:09:38.347 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:09:38.347 Library m found: YES 00:09:38.347 Library numa found: YES 00:09:38.347 Has header "numaif.h" : YES 00:09:38.347 Library fdt found: NO 00:09:38.347 Library execinfo found: NO 00:09:38.347 Has header "execinfo.h" : YES 00:09:38.347 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:09:38.347 Run-time dependency libarchive found: NO (tried pkgconfig) 00:09:38.347 Run-time dependency libbsd found: NO (tried pkgconfig) 00:09:38.347 Run-time dependency jansson found: NO (tried pkgconfig) 00:09:38.347 Run-time dependency openssl found: YES 3.0.9 00:09:38.347 Run-time dependency libpcap found: YES 1.10.4 00:09:38.347 Has header "pcap.h" with dependency libpcap: YES 00:09:38.347 Compiler for C supports arguments -Wcast-qual: YES 00:09:38.347 Compiler for C supports arguments -Wdeprecated: YES 00:09:38.347 Compiler for C supports arguments -Wformat: YES 00:09:38.347 Compiler for C supports arguments -Wformat-nonliteral: NO 00:09:38.347 Compiler for C supports arguments -Wformat-security: NO 00:09:38.347 Compiler for C supports arguments -Wmissing-declarations: YES 00:09:38.347 Compiler for C supports arguments -Wmissing-prototypes: YES 00:09:38.347 Compiler for C supports arguments -Wnested-externs: YES 00:09:38.347 Compiler for C supports arguments -Wold-style-definition: YES 00:09:38.347 Compiler for C supports arguments -Wpointer-arith: YES 00:09:38.347 Compiler for C supports arguments -Wsign-compare: YES 00:09:38.347 Compiler for C supports arguments -Wstrict-prototypes: YES 00:09:38.347 Compiler for C supports arguments -Wundef: YES 00:09:38.347 Compiler for C supports arguments -Wwrite-strings: YES 00:09:38.347 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:09:38.347 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:09:38.347 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:09:38.347 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:09:38.347 Program objdump found: YES (/usr/bin/objdump) 00:09:38.347 Compiler for C supports arguments -mavx512f: YES 00:09:38.347 Checking if "AVX512 checking" compiles: YES 00:09:38.347 Fetching value of define "__SSE4_2__" : 1 00:09:38.347 Fetching value of define "__AES__" : 1 00:09:38.347 Fetching value of define "__AVX__" : 1 00:09:38.347 Fetching value of define "__AVX2__" : 1 00:09:38.347 Fetching value of define "__AVX512BW__" : 1 00:09:38.347 Fetching value of define "__AVX512CD__" : 1 00:09:38.347 Fetching value of define "__AVX512DQ__" : 1 00:09:38.347 Fetching value of define "__AVX512F__" : 1 00:09:38.347 Fetching value of define "__AVX512VL__" : 1 00:09:38.347 Fetching value of define "__PCLMUL__" : 1 00:09:38.347 Fetching value of define "__RDRND__" : 1 00:09:38.347 Fetching value of define "__RDSEED__" : 1 00:09:38.347 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:09:38.348 Fetching value of define "__znver1__" : (undefined) 00:09:38.348 Fetching value of define "__znver2__" : (undefined) 00:09:38.348 Fetching value of define "__znver3__" : (undefined) 00:09:38.348 Fetching value of define "__znver4__" : (undefined) 00:09:38.348 Compiler for C supports arguments -Wno-format-truncation: YES 00:09:38.348 Message: lib/log: Defining dependency "log" 00:09:38.348 Message: lib/kvargs: Defining dependency "kvargs" 00:09:38.348 Message: lib/telemetry: Defining dependency "telemetry" 00:09:38.348 Checking for function "getentropy" : NO 00:09:38.348 Message: lib/eal: Defining dependency "eal" 00:09:38.348 Message: lib/ring: Defining dependency "ring" 00:09:38.348 Message: lib/rcu: Defining dependency "rcu" 00:09:38.348 Message: lib/mempool: Defining dependency "mempool" 00:09:38.348 Message: lib/mbuf: Defining dependency "mbuf" 00:09:38.348 Fetching value of define "__PCLMUL__" : 1 (cached) 00:09:38.348 Fetching value of define "__AVX512F__" : 1 (cached) 00:09:38.348 Fetching value of define "__AVX512BW__" : 1 (cached) 00:09:38.348 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:09:38.348 Fetching value of define "__AVX512VL__" : 1 (cached) 00:09:38.348 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:09:38.348 Compiler for C supports arguments -mpclmul: YES 00:09:38.348 Compiler for C supports arguments -maes: YES 00:09:38.348 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:38.348 Compiler for C supports arguments -mavx512bw: YES 00:09:38.348 Compiler for C supports arguments -mavx512dq: YES 00:09:38.348 Compiler for C supports arguments -mavx512vl: YES 00:09:38.348 Compiler for C supports arguments -mvpclmulqdq: YES 00:09:38.348 Compiler for C supports arguments -mavx2: YES 00:09:38.348 Compiler for C supports arguments -mavx: YES 00:09:38.348 Message: lib/net: Defining dependency "net" 00:09:38.348 Message: lib/meter: Defining dependency "meter" 00:09:38.348 Message: lib/ethdev: Defining dependency "ethdev" 00:09:38.348 Message: lib/pci: Defining dependency "pci" 00:09:38.348 Message: lib/cmdline: Defining dependency "cmdline" 00:09:38.348 Message: lib/hash: Defining dependency "hash" 00:09:38.348 Message: lib/timer: Defining dependency "timer" 00:09:38.348 Message: lib/compressdev: Defining dependency "compressdev" 00:09:38.348 Message: lib/cryptodev: Defining dependency "cryptodev" 00:09:38.348 Message: lib/dmadev: Defining dependency "dmadev" 00:09:38.348 
Compiler for C supports arguments -Wno-cast-qual: YES 00:09:38.348 Message: lib/power: Defining dependency "power" 00:09:38.348 Message: lib/reorder: Defining dependency "reorder" 00:09:38.348 Message: lib/security: Defining dependency "security" 00:09:38.348 Has header "linux/userfaultfd.h" : YES 00:09:38.348 Has header "linux/vduse.h" : YES 00:09:38.348 Message: lib/vhost: Defining dependency "vhost" 00:09:38.348 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:09:38.348 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:09:38.348 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:09:38.348 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:09:38.348 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:09:38.348 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:09:38.348 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:09:38.348 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:09:38.348 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:09:38.348 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:09:38.348 Program doxygen found: YES (/usr/bin/doxygen) 00:09:38.348 Configuring doxy-api-html.conf using configuration 00:09:38.348 Configuring doxy-api-man.conf using configuration 00:09:38.348 Program mandb found: YES (/usr/bin/mandb) 00:09:38.348 Program sphinx-build found: NO 00:09:38.348 Configuring rte_build_config.h using configuration 00:09:38.348 Message: 00:09:38.348 ================= 00:09:38.348 Applications Enabled 00:09:38.348 ================= 00:09:38.348 00:09:38.348 apps: 00:09:38.348 00:09:38.348 00:09:38.348 Message: 00:09:38.348 ================= 00:09:38.348 Libraries Enabled 00:09:38.348 ================= 00:09:38.348 00:09:38.348 libs: 00:09:38.348 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:09:38.348 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:09:38.348 cryptodev, dmadev, power, reorder, security, vhost, 00:09:38.348 00:09:38.348 Message: 00:09:38.348 =============== 00:09:38.348 Drivers Enabled 00:09:38.348 =============== 00:09:38.348 00:09:38.348 common: 00:09:38.348 00:09:38.348 bus: 00:09:38.348 pci, vdev, 00:09:38.348 mempool: 00:09:38.348 ring, 00:09:38.348 dma: 00:09:38.348 00:09:38.348 net: 00:09:38.348 00:09:38.348 crypto: 00:09:38.348 00:09:38.348 compress: 00:09:38.348 00:09:38.348 vdpa: 00:09:38.348 00:09:38.348 00:09:38.348 Message: 00:09:38.348 ================= 00:09:38.348 Content Skipped 00:09:38.348 ================= 00:09:38.348 00:09:38.348 apps: 00:09:38.348 dumpcap: explicitly disabled via build config 00:09:38.348 graph: explicitly disabled via build config 00:09:38.348 pdump: explicitly disabled via build config 00:09:38.348 proc-info: explicitly disabled via build config 00:09:38.348 test-acl: explicitly disabled via build config 00:09:38.348 test-bbdev: explicitly disabled via build config 00:09:38.348 test-cmdline: explicitly disabled via build config 00:09:38.348 test-compress-perf: explicitly disabled via build config 00:09:38.348 test-crypto-perf: explicitly disabled via build config 00:09:38.348 test-dma-perf: explicitly disabled via build config 00:09:38.348 test-eventdev: explicitly disabled via build config 00:09:38.348 test-fib: explicitly disabled via build config 00:09:38.348 test-flow-perf: explicitly disabled via build config 00:09:38.348 test-gpudev: explicitly disabled via build config 
00:09:38.348 test-mldev: explicitly disabled via build config 00:09:38.348 test-pipeline: explicitly disabled via build config 00:09:38.348 test-pmd: explicitly disabled via build config 00:09:38.348 test-regex: explicitly disabled via build config 00:09:38.348 test-sad: explicitly disabled via build config 00:09:38.348 test-security-perf: explicitly disabled via build config 00:09:38.348 00:09:38.348 libs: 00:09:38.348 argparse: explicitly disabled via build config 00:09:38.348 metrics: explicitly disabled via build config 00:09:38.348 acl: explicitly disabled via build config 00:09:38.348 bbdev: explicitly disabled via build config 00:09:38.348 bitratestats: explicitly disabled via build config 00:09:38.348 bpf: explicitly disabled via build config 00:09:38.348 cfgfile: explicitly disabled via build config 00:09:38.348 distributor: explicitly disabled via build config 00:09:38.348 efd: explicitly disabled via build config 00:09:38.348 eventdev: explicitly disabled via build config 00:09:38.348 dispatcher: explicitly disabled via build config 00:09:38.348 gpudev: explicitly disabled via build config 00:09:38.348 gro: explicitly disabled via build config 00:09:38.348 gso: explicitly disabled via build config 00:09:38.348 ip_frag: explicitly disabled via build config 00:09:38.348 jobstats: explicitly disabled via build config 00:09:38.348 latencystats: explicitly disabled via build config 00:09:38.348 lpm: explicitly disabled via build config 00:09:38.348 member: explicitly disabled via build config 00:09:38.348 pcapng: explicitly disabled via build config 00:09:38.348 rawdev: explicitly disabled via build config 00:09:38.348 regexdev: explicitly disabled via build config 00:09:38.348 mldev: explicitly disabled via build config 00:09:38.348 rib: explicitly disabled via build config 00:09:38.348 sched: explicitly disabled via build config 00:09:38.348 stack: explicitly disabled via build config 00:09:38.348 ipsec: explicitly disabled via build config 00:09:38.348 pdcp: explicitly disabled via build config 00:09:38.348 fib: explicitly disabled via build config 00:09:38.348 port: explicitly disabled via build config 00:09:38.348 pdump: explicitly disabled via build config 00:09:38.348 table: explicitly disabled via build config 00:09:38.348 pipeline: explicitly disabled via build config 00:09:38.348 graph: explicitly disabled via build config 00:09:38.348 node: explicitly disabled via build config 00:09:38.348 00:09:38.348 drivers: 00:09:38.348 common/cpt: not in enabled drivers build config 00:09:38.348 common/dpaax: not in enabled drivers build config 00:09:38.348 common/iavf: not in enabled drivers build config 00:09:38.348 common/idpf: not in enabled drivers build config 00:09:38.348 common/ionic: not in enabled drivers build config 00:09:38.348 common/mvep: not in enabled drivers build config 00:09:38.348 common/octeontx: not in enabled drivers build config 00:09:38.348 bus/auxiliary: not in enabled drivers build config 00:09:38.348 bus/cdx: not in enabled drivers build config 00:09:38.348 bus/dpaa: not in enabled drivers build config 00:09:38.348 bus/fslmc: not in enabled drivers build config 00:09:38.348 bus/ifpga: not in enabled drivers build config 00:09:38.349 bus/platform: not in enabled drivers build config 00:09:38.349 bus/uacce: not in enabled drivers build config 00:09:38.349 bus/vmbus: not in enabled drivers build config 00:09:38.349 common/cnxk: not in enabled drivers build config 00:09:38.349 common/mlx5: not in enabled drivers build config 00:09:38.349 common/nfp: not in 
enabled drivers build config 00:09:38.349 common/nitrox: not in enabled drivers build config 00:09:38.349 common/qat: not in enabled drivers build config 00:09:38.349 common/sfc_efx: not in enabled drivers build config 00:09:38.349 mempool/bucket: not in enabled drivers build config 00:09:38.349 mempool/cnxk: not in enabled drivers build config 00:09:38.349 mempool/dpaa: not in enabled drivers build config 00:09:38.349 mempool/dpaa2: not in enabled drivers build config 00:09:38.349 mempool/octeontx: not in enabled drivers build config 00:09:38.349 mempool/stack: not in enabled drivers build config 00:09:38.349 dma/cnxk: not in enabled drivers build config 00:09:38.349 dma/dpaa: not in enabled drivers build config 00:09:38.349 dma/dpaa2: not in enabled drivers build config 00:09:38.349 dma/hisilicon: not in enabled drivers build config 00:09:38.349 dma/idxd: not in enabled drivers build config 00:09:38.349 dma/ioat: not in enabled drivers build config 00:09:38.349 dma/skeleton: not in enabled drivers build config 00:09:38.349 net/af_packet: not in enabled drivers build config 00:09:38.349 net/af_xdp: not in enabled drivers build config 00:09:38.349 net/ark: not in enabled drivers build config 00:09:38.349 net/atlantic: not in enabled drivers build config 00:09:38.349 net/avp: not in enabled drivers build config 00:09:38.349 net/axgbe: not in enabled drivers build config 00:09:38.349 net/bnx2x: not in enabled drivers build config 00:09:38.349 net/bnxt: not in enabled drivers build config 00:09:38.349 net/bonding: not in enabled drivers build config 00:09:38.349 net/cnxk: not in enabled drivers build config 00:09:38.349 net/cpfl: not in enabled drivers build config 00:09:38.349 net/cxgbe: not in enabled drivers build config 00:09:38.349 net/dpaa: not in enabled drivers build config 00:09:38.349 net/dpaa2: not in enabled drivers build config 00:09:38.349 net/e1000: not in enabled drivers build config 00:09:38.349 net/ena: not in enabled drivers build config 00:09:38.349 net/enetc: not in enabled drivers build config 00:09:38.349 net/enetfec: not in enabled drivers build config 00:09:38.349 net/enic: not in enabled drivers build config 00:09:38.349 net/failsafe: not in enabled drivers build config 00:09:38.349 net/fm10k: not in enabled drivers build config 00:09:38.349 net/gve: not in enabled drivers build config 00:09:38.349 net/hinic: not in enabled drivers build config 00:09:38.349 net/hns3: not in enabled drivers build config 00:09:38.349 net/i40e: not in enabled drivers build config 00:09:38.349 net/iavf: not in enabled drivers build config 00:09:38.349 net/ice: not in enabled drivers build config 00:09:38.349 net/idpf: not in enabled drivers build config 00:09:38.349 net/igc: not in enabled drivers build config 00:09:38.349 net/ionic: not in enabled drivers build config 00:09:38.349 net/ipn3ke: not in enabled drivers build config 00:09:38.349 net/ixgbe: not in enabled drivers build config 00:09:38.349 net/mana: not in enabled drivers build config 00:09:38.349 net/memif: not in enabled drivers build config 00:09:38.349 net/mlx4: not in enabled drivers build config 00:09:38.349 net/mlx5: not in enabled drivers build config 00:09:38.349 net/mvneta: not in enabled drivers build config 00:09:38.349 net/mvpp2: not in enabled drivers build config 00:09:38.349 net/netvsc: not in enabled drivers build config 00:09:38.349 net/nfb: not in enabled drivers build config 00:09:38.349 net/nfp: not in enabled drivers build config 00:09:38.349 net/ngbe: not in enabled drivers build config 00:09:38.349 
net/null: not in enabled drivers build config 00:09:38.349 net/octeontx: not in enabled drivers build config 00:09:38.349 net/octeon_ep: not in enabled drivers build config 00:09:38.349 net/pcap: not in enabled drivers build config 00:09:38.349 net/pfe: not in enabled drivers build config 00:09:38.349 net/qede: not in enabled drivers build config 00:09:38.349 net/ring: not in enabled drivers build config 00:09:38.349 net/sfc: not in enabled drivers build config 00:09:38.349 net/softnic: not in enabled drivers build config 00:09:38.349 net/tap: not in enabled drivers build config 00:09:38.349 net/thunderx: not in enabled drivers build config 00:09:38.349 net/txgbe: not in enabled drivers build config 00:09:38.349 net/vdev_netvsc: not in enabled drivers build config 00:09:38.349 net/vhost: not in enabled drivers build config 00:09:38.349 net/virtio: not in enabled drivers build config 00:09:38.349 net/vmxnet3: not in enabled drivers build config 00:09:38.349 raw/*: missing internal dependency, "rawdev" 00:09:38.349 crypto/armv8: not in enabled drivers build config 00:09:38.349 crypto/bcmfs: not in enabled drivers build config 00:09:38.349 crypto/caam_jr: not in enabled drivers build config 00:09:38.349 crypto/ccp: not in enabled drivers build config 00:09:38.349 crypto/cnxk: not in enabled drivers build config 00:09:38.349 crypto/dpaa_sec: not in enabled drivers build config 00:09:38.349 crypto/dpaa2_sec: not in enabled drivers build config 00:09:38.349 crypto/ipsec_mb: not in enabled drivers build config 00:09:38.349 crypto/mlx5: not in enabled drivers build config 00:09:38.349 crypto/mvsam: not in enabled drivers build config 00:09:38.349 crypto/nitrox: not in enabled drivers build config 00:09:38.349 crypto/null: not in enabled drivers build config 00:09:38.349 crypto/octeontx: not in enabled drivers build config 00:09:38.349 crypto/openssl: not in enabled drivers build config 00:09:38.349 crypto/scheduler: not in enabled drivers build config 00:09:38.349 crypto/uadk: not in enabled drivers build config 00:09:38.349 crypto/virtio: not in enabled drivers build config 00:09:38.349 compress/isal: not in enabled drivers build config 00:09:38.349 compress/mlx5: not in enabled drivers build config 00:09:38.349 compress/nitrox: not in enabled drivers build config 00:09:38.349 compress/octeontx: not in enabled drivers build config 00:09:38.349 compress/zlib: not in enabled drivers build config 00:09:38.349 regex/*: missing internal dependency, "regexdev" 00:09:38.349 ml/*: missing internal dependency, "mldev" 00:09:38.349 vdpa/ifc: not in enabled drivers build config 00:09:38.349 vdpa/mlx5: not in enabled drivers build config 00:09:38.349 vdpa/nfp: not in enabled drivers build config 00:09:38.349 vdpa/sfc: not in enabled drivers build config 00:09:38.349 event/*: missing internal dependency, "eventdev" 00:09:38.349 baseband/*: missing internal dependency, "bbdev" 00:09:38.349 gpu/*: missing internal dependency, "gpudev" 00:09:38.349 00:09:38.349 00:09:38.349 Build targets in project: 85 00:09:38.349 00:09:38.349 DPDK 24.03.0 00:09:38.349 00:09:38.349 User defined options 00:09:38.349 buildtype : debug 00:09:38.349 default_library : shared 00:09:38.349 libdir : lib 00:09:38.349 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:38.349 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:09:38.349 c_link_args : 00:09:38.349 cpu_instruction_set: native 00:09:38.349 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:09:38.349 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:09:38.349 enable_docs : false 00:09:38.349 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:09:38.349 enable_kmods : false 00:09:38.349 tests : false 00:09:38.349 00:09:38.349 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:38.349 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:09:38.349 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:09:38.349 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:09:38.349 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:09:38.349 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:09:38.349 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:09:38.349 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:09:38.349 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:09:38.349 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:09:38.349 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:09:38.349 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:09:38.641 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:09:38.641 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:09:38.641 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:09:38.641 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:09:38.641 [15/268] Linking static target lib/librte_kvargs.a 00:09:38.641 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:09:38.641 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:09:38.641 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:09:38.641 [19/268] Linking static target lib/librte_log.a 00:09:38.641 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:09:38.641 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:09:38.641 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:09:38.641 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:09:38.641 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:09:38.641 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:09:38.641 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:09:38.641 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:09:38.641 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:09:38.641 [29/268] Linking static target lib/librte_pci.a 00:09:38.641 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:09:38.641 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:09:38.641 [32/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:09:38.910 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:09:38.911 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:09:38.911 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:09:38.911 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:09:38.911 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:09:38.911 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:09:38.911 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:09:38.911 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:09:38.911 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:09:38.911 [42/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:09:38.911 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:09:38.911 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:09:38.911 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:09:38.911 [46/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:09:38.911 [47/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:09:38.911 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:09:38.911 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:09:38.911 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:09:38.911 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:09:38.911 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:09:38.911 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:09:38.911 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:09:38.911 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:09:38.911 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:09:38.911 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:09:39.169 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:09:39.169 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:09:39.169 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:09:39.169 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:09:39.169 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:09:39.169 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:09:39.169 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:09:39.169 [65/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:09:39.169 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:09:39.169 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:09:39.169 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:09:39.169 [69/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:09:39.169 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:09:39.169 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:09:39.170 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:09:39.170 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:09:39.170 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:09:39.170 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:09:39.170 [76/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:09:39.170 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:09:39.170 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:09:39.170 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:09:39.170 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:09:39.170 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:09:39.170 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:09:39.170 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:09:39.170 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:09:39.170 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:09:39.170 [86/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:09:39.170 [87/268] Linking static target lib/librte_meter.a 00:09:39.170 [88/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.170 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:09:39.170 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:09:39.170 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:09:39.170 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:09:39.170 [93/268] Linking static target lib/librte_ring.a 00:09:39.170 [94/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:09:39.170 [95/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:09:39.170 [96/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.170 [97/268] Linking static target lib/librte_telemetry.a 00:09:39.170 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:09:39.170 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:09:39.170 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:09:39.170 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:09:39.170 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:09:39.170 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:09:39.170 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:09:39.170 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:09:39.170 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:09:39.170 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:09:39.170 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:09:39.170 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:09:39.170 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:09:39.170 [111/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:09:39.170 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:09:39.170 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:09:39.170 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:09:39.170 [115/268] Linking static target lib/librte_cmdline.a 00:09:39.170 [116/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:09:39.170 [117/268] Linking static target lib/librte_mempool.a 00:09:39.170 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:09:39.170 [119/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:09:39.170 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:09:39.170 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:09:39.170 [122/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:09:39.170 [123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:09:39.170 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:09:39.170 [125/268] Linking static target lib/librte_rcu.a 00:09:39.170 [126/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:09:39.170 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:09:39.170 [128/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:09:39.170 [129/268] Linking static target lib/librte_net.a 00:09:39.170 [130/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:09:39.170 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:09:39.170 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:09:39.170 [133/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:09:39.170 [134/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:09:39.170 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:09:39.170 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:09:39.170 [137/268] Linking static target lib/librte_eal.a 00:09:39.170 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:09:39.170 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:09:39.170 [140/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:09:39.429 [141/268] Linking static target lib/librte_timer.a 00:09:39.429 [142/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:09:39.429 [143/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:09:39.429 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:09:39.429 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:09:39.429 [146/268] Linking static target lib/librte_compressdev.a 00:09:39.429 [147/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:09:39.429 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:09:39.429 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:09:39.429 [150/268] Linking static target lib/librte_dmadev.a 00:09:39.429 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:09:39.429 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:09:39.429 [153/268] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.429 [154/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:09:39.429 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:09:39.429 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:09:39.429 [157/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.429 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:09:39.429 [159/268] Linking target lib/librte_log.so.24.1 00:09:39.429 [160/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:09:39.429 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:09:39.429 [162/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:09:39.429 [163/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:09:39.429 [164/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:09:39.429 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:09:39.429 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:09:39.429 [167/268] Linking static target lib/librte_mbuf.a 00:09:39.429 [168/268] Linking static target lib/librte_power.a 00:09:39.429 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:09:39.429 [170/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.688 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:09:39.688 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:39.688 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:39.688 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:39.688 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:09:39.688 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:09:39.688 [177/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.688 [178/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:09:39.688 [179/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:09:39.688 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:39.688 [181/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:09:39.688 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:39.688 [183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:09:39.688 [184/268] Linking static target lib/librte_security.a 00:09:39.688 [185/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.688 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:39.688 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:39.688 [188/268] Linking static target lib/librte_reorder.a 00:09:39.688 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:09:39.688 [190/268] Linking target lib/librte_kvargs.so.24.1 00:09:39.688 [191/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:09:39.688 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:09:39.688 [193/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by 
meson to capture output) 00:09:39.688 [194/268] Linking static target lib/librte_hash.a 00:09:39.688 [195/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:39.688 [196/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:39.688 [197/268] Linking static target drivers/librte_bus_vdev.a 00:09:39.688 [198/268] Linking target lib/librte_telemetry.so.24.1 00:09:39.688 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:39.688 [200/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:39.947 [201/268] Linking static target lib/librte_cryptodev.a 00:09:39.947 [202/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:09:39.947 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:09:39.947 [204/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:09:39.947 [205/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:09:39.947 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:09:39.947 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:39.947 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:39.947 [209/268] Linking static target drivers/librte_mempool_ring.a 00:09:39.947 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:39.947 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:39.947 [212/268] Linking static target drivers/librte_bus_pci.a 00:09:39.947 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:09:40.205 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.205 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.205 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.205 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.205 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.205 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:09:40.205 [220/268] Linking static target lib/librte_ethdev.a 00:09:40.464 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.464 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.464 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:09:40.464 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.723 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.723 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:40.723 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:09:41.657 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:41.657 [229/268] Linking static target lib/librte_vhost.a 00:09:42.225 [230/268] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:44.131 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:50.701 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:52.607 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:52.607 [234/268] Linking target lib/librte_eal.so.24.1 00:09:52.866 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:09:52.866 [236/268] Linking target lib/librte_dmadev.so.24.1 00:09:52.866 [237/268] Linking target lib/librte_meter.so.24.1 00:09:52.866 [238/268] Linking target lib/librte_ring.so.24.1 00:09:52.866 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:09:52.866 [240/268] Linking target lib/librte_pci.so.24.1 00:09:52.866 [241/268] Linking target lib/librte_timer.so.24.1 00:09:53.124 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:09:53.124 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:09:53.124 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:09:53.124 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:09:53.124 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:09:53.124 [247/268] Linking target lib/librte_rcu.so.24.1 00:09:53.124 [248/268] Linking target lib/librte_mempool.so.24.1 00:09:53.124 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:09:53.124 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:09:53.124 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:09:53.383 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:09:53.383 [253/268] Linking target lib/librte_mbuf.so.24.1 00:09:53.383 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:09:53.642 [255/268] Linking target lib/librte_net.so.24.1 00:09:53.642 [256/268] Linking target lib/librte_compressdev.so.24.1 00:09:53.642 [257/268] Linking target lib/librte_reorder.so.24.1 00:09:53.642 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:09:53.642 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:09:53.642 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:09:53.642 [261/268] Linking target lib/librte_hash.so.24.1 00:09:53.642 [262/268] Linking target lib/librte_security.so.24.1 00:09:53.642 [263/268] Linking target lib/librte_cmdline.so.24.1 00:09:53.642 [264/268] Linking target lib/librte_ethdev.so.24.1 00:09:53.901 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:09:53.901 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:09:53.901 [267/268] Linking target lib/librte_power.so.24.1 00:09:53.901 [268/268] Linking target lib/librte_vhost.so.24.1 00:09:53.901 INFO: autodetecting backend as ninja 00:09:53.901 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:09:55.278 CC lib/ut/ut.o 00:09:55.278 CC lib/log/log.o 00:09:55.278 CC lib/log/log_flags.o 00:09:55.278 CC lib/log/log_deprecated.o 00:09:55.278 CC lib/ut_mock/mock.o 00:09:55.537 LIB 
libspdk_log.a 00:09:55.537 LIB libspdk_ut.a 00:09:55.537 LIB libspdk_ut_mock.a 00:09:55.537 SO libspdk_log.so.7.0 00:09:55.537 SO libspdk_ut.so.2.0 00:09:55.537 SO libspdk_ut_mock.so.6.0 00:09:55.537 SYMLINK libspdk_ut.so 00:09:55.537 SYMLINK libspdk_log.so 00:09:55.537 SYMLINK libspdk_ut_mock.so 00:09:55.796 CC lib/util/base64.o 00:09:55.796 CC lib/dma/dma.o 00:09:55.796 CC lib/util/bit_array.o 00:09:55.796 CC lib/util/cpuset.o 00:09:55.796 CC lib/util/crc16.o 00:09:55.796 CC lib/util/crc32c.o 00:09:55.796 CC lib/util/crc32.o 00:09:55.796 CC lib/util/crc32_ieee.o 00:09:55.796 CC lib/util/crc64.o 00:09:55.796 CC lib/util/dif.o 00:09:55.796 CC lib/util/fd.o 00:09:55.796 CXX lib/trace_parser/trace.o 00:09:55.796 CC lib/util/file.o 00:09:55.796 CC lib/util/hexlify.o 00:09:55.796 CC lib/util/iov.o 00:09:55.796 CC lib/util/math.o 00:09:55.796 CC lib/util/pipe.o 00:09:55.796 CC lib/ioat/ioat.o 00:09:55.796 CC lib/util/strerror_tls.o 00:09:55.796 CC lib/util/string.o 00:09:55.796 CC lib/util/uuid.o 00:09:56.080 CC lib/util/fd_group.o 00:09:56.080 CC lib/util/xor.o 00:09:56.080 CC lib/util/zipf.o 00:09:56.080 CC lib/vfio_user/host/vfio_user_pci.o 00:09:56.080 CC lib/vfio_user/host/vfio_user.o 00:09:56.080 LIB libspdk_dma.a 00:09:56.080 SO libspdk_dma.so.4.0 00:09:56.339 LIB libspdk_ioat.a 00:09:56.339 SO libspdk_ioat.so.7.0 00:09:56.339 SYMLINK libspdk_dma.so 00:09:56.339 SYMLINK libspdk_ioat.so 00:09:56.339 LIB libspdk_vfio_user.a 00:09:56.339 SO libspdk_vfio_user.so.5.0 00:09:56.339 LIB libspdk_util.a 00:09:56.597 SYMLINK libspdk_vfio_user.so 00:09:56.597 SO libspdk_util.so.9.0 00:09:56.597 SYMLINK libspdk_util.so 00:09:56.857 LIB libspdk_trace_parser.a 00:09:56.857 SO libspdk_trace_parser.so.5.0 00:09:56.857 SYMLINK libspdk_trace_parser.so 00:09:57.116 CC lib/json/json_parse.o 00:09:57.116 CC lib/json/json_util.o 00:09:57.116 CC lib/json/json_write.o 00:09:57.116 CC lib/vmd/vmd.o 00:09:57.116 CC lib/vmd/led.o 00:09:57.116 CC lib/rdma/common.o 00:09:57.116 CC lib/idxd/idxd.o 00:09:57.116 CC lib/rdma/rdma_verbs.o 00:09:57.116 CC lib/idxd/idxd_user.o 00:09:57.116 CC lib/idxd/idxd_kernel.o 00:09:57.116 CC lib/conf/conf.o 00:09:57.116 CC lib/env_dpdk/env.o 00:09:57.116 CC lib/env_dpdk/memory.o 00:09:57.116 CC lib/env_dpdk/pci.o 00:09:57.116 CC lib/env_dpdk/init.o 00:09:57.116 CC lib/env_dpdk/threads.o 00:09:57.116 CC lib/env_dpdk/pci_ioat.o 00:09:57.116 CC lib/env_dpdk/pci_virtio.o 00:09:57.116 CC lib/env_dpdk/pci_vmd.o 00:09:57.116 CC lib/env_dpdk/pci_idxd.o 00:09:57.116 CC lib/env_dpdk/pci_event.o 00:09:57.116 CC lib/env_dpdk/sigbus_handler.o 00:09:57.116 CC lib/env_dpdk/pci_dpdk.o 00:09:57.116 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:57.116 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:57.375 LIB libspdk_conf.a 00:09:57.375 SO libspdk_conf.so.6.0 00:09:57.375 LIB libspdk_json.a 00:09:57.375 LIB libspdk_rdma.a 00:09:57.375 SYMLINK libspdk_conf.so 00:09:57.375 SO libspdk_json.so.6.0 00:09:57.375 SO libspdk_rdma.so.6.0 00:09:57.632 SYMLINK libspdk_json.so 00:09:57.632 SYMLINK libspdk_rdma.so 00:09:57.632 LIB libspdk_idxd.a 00:09:57.632 SO libspdk_idxd.so.12.0 00:09:57.632 LIB libspdk_vmd.a 00:09:57.890 SO libspdk_vmd.so.6.0 00:09:57.890 SYMLINK libspdk_idxd.so 00:09:57.890 SYMLINK libspdk_vmd.so 00:09:57.890 CC lib/jsonrpc/jsonrpc_server.o 00:09:57.890 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:57.890 CC lib/jsonrpc/jsonrpc_client.o 00:09:57.890 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:58.149 LIB libspdk_env_dpdk.a 00:09:58.149 LIB libspdk_jsonrpc.a 00:09:58.149 SO libspdk_jsonrpc.so.6.0 00:09:58.149 SO 
libspdk_env_dpdk.so.14.1 00:09:58.407 SYMLINK libspdk_jsonrpc.so 00:09:58.407 SYMLINK libspdk_env_dpdk.so 00:09:58.666 CC lib/rpc/rpc.o 00:09:58.924 LIB libspdk_rpc.a 00:09:58.924 SO libspdk_rpc.so.6.0 00:09:58.924 SYMLINK libspdk_rpc.so 00:09:59.490 CC lib/keyring/keyring.o 00:09:59.490 CC lib/keyring/keyring_rpc.o 00:09:59.490 CC lib/notify/notify.o 00:09:59.490 CC lib/notify/notify_rpc.o 00:09:59.490 CC lib/trace/trace.o 00:09:59.490 CC lib/trace/trace_flags.o 00:09:59.490 CC lib/trace/trace_rpc.o 00:09:59.490 LIB libspdk_notify.a 00:09:59.490 SO libspdk_notify.so.6.0 00:09:59.749 LIB libspdk_keyring.a 00:09:59.749 LIB libspdk_trace.a 00:09:59.749 SYMLINK libspdk_notify.so 00:09:59.749 SO libspdk_keyring.so.1.0 00:09:59.749 SO libspdk_trace.so.10.0 00:09:59.749 SYMLINK libspdk_keyring.so 00:09:59.749 SYMLINK libspdk_trace.so 00:10:00.317 CC lib/sock/sock.o 00:10:00.317 CC lib/sock/sock_rpc.o 00:10:00.317 CC lib/thread/thread.o 00:10:00.317 CC lib/thread/iobuf.o 00:10:00.576 LIB libspdk_sock.a 00:10:00.576 SO libspdk_sock.so.9.0 00:10:00.576 SYMLINK libspdk_sock.so 00:10:01.145 CC lib/nvme/nvme_ctrlr_cmd.o 00:10:01.145 CC lib/nvme/nvme_ctrlr.o 00:10:01.145 CC lib/nvme/nvme_fabric.o 00:10:01.145 CC lib/nvme/nvme_ns_cmd.o 00:10:01.145 CC lib/nvme/nvme_ns.o 00:10:01.145 CC lib/nvme/nvme_pcie_common.o 00:10:01.145 CC lib/nvme/nvme_pcie.o 00:10:01.145 CC lib/nvme/nvme_qpair.o 00:10:01.145 CC lib/nvme/nvme.o 00:10:01.145 CC lib/nvme/nvme_quirks.o 00:10:01.145 CC lib/nvme/nvme_transport.o 00:10:01.145 CC lib/nvme/nvme_discovery.o 00:10:01.145 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:01.145 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:01.145 CC lib/nvme/nvme_io_msg.o 00:10:01.145 CC lib/nvme/nvme_tcp.o 00:10:01.145 CC lib/nvme/nvme_opal.o 00:10:01.145 CC lib/nvme/nvme_poll_group.o 00:10:01.145 CC lib/nvme/nvme_zns.o 00:10:01.145 CC lib/nvme/nvme_stubs.o 00:10:01.145 CC lib/nvme/nvme_auth.o 00:10:01.145 CC lib/nvme/nvme_cuse.o 00:10:01.145 CC lib/nvme/nvme_vfio_user.o 00:10:01.145 CC lib/nvme/nvme_rdma.o 00:10:01.713 LIB libspdk_thread.a 00:10:01.713 SO libspdk_thread.so.10.0 00:10:01.713 SYMLINK libspdk_thread.so 00:10:01.973 CC lib/accel/accel.o 00:10:01.973 CC lib/accel/accel_rpc.o 00:10:01.973 CC lib/accel/accel_sw.o 00:10:01.973 CC lib/init/json_config.o 00:10:01.973 CC lib/init/subsystem.o 00:10:01.973 CC lib/init/subsystem_rpc.o 00:10:01.973 CC lib/init/rpc.o 00:10:01.973 CC lib/blob/blobstore.o 00:10:01.973 CC lib/blob/request.o 00:10:01.973 CC lib/blob/zeroes.o 00:10:01.973 CC lib/blob/blob_bs_dev.o 00:10:01.973 CC lib/virtio/virtio.o 00:10:01.973 CC lib/virtio/virtio_vhost_user.o 00:10:01.973 CC lib/virtio/virtio_vfio_user.o 00:10:01.973 CC lib/virtio/virtio_pci.o 00:10:01.973 CC lib/vfu_tgt/tgt_endpoint.o 00:10:01.973 CC lib/vfu_tgt/tgt_rpc.o 00:10:02.232 LIB libspdk_init.a 00:10:02.491 SO libspdk_init.so.5.0 00:10:02.491 LIB libspdk_virtio.a 00:10:02.491 LIB libspdk_vfu_tgt.a 00:10:02.491 SYMLINK libspdk_init.so 00:10:02.491 SO libspdk_vfu_tgt.so.3.0 00:10:02.491 SO libspdk_virtio.so.7.0 00:10:02.491 SYMLINK libspdk_vfu_tgt.so 00:10:02.491 SYMLINK libspdk_virtio.so 00:10:02.750 CC lib/event/app.o 00:10:02.750 CC lib/event/reactor.o 00:10:02.750 CC lib/event/log_rpc.o 00:10:02.750 CC lib/event/app_rpc.o 00:10:02.750 CC lib/event/scheduler_static.o 00:10:03.009 LIB libspdk_accel.a 00:10:03.010 SO libspdk_accel.so.15.0 00:10:03.010 LIB libspdk_nvme.a 00:10:03.010 SYMLINK libspdk_accel.so 00:10:03.268 LIB libspdk_event.a 00:10:03.268 SO libspdk_nvme.so.13.1 00:10:03.268 SO libspdk_event.so.13.1 
00:10:03.268 SYMLINK libspdk_event.so 00:10:03.527 CC lib/bdev/bdev.o 00:10:03.527 CC lib/bdev/bdev_rpc.o 00:10:03.527 CC lib/bdev/bdev_zone.o 00:10:03.527 CC lib/bdev/part.o 00:10:03.527 CC lib/bdev/scsi_nvme.o 00:10:03.527 SYMLINK libspdk_nvme.so 00:10:04.905 LIB libspdk_blob.a 00:10:04.905 SO libspdk_blob.so.11.0 00:10:04.905 SYMLINK libspdk_blob.so 00:10:05.535 CC lib/blobfs/blobfs.o 00:10:05.535 CC lib/blobfs/tree.o 00:10:05.535 CC lib/lvol/lvol.o 00:10:05.816 LIB libspdk_bdev.a 00:10:06.075 SO libspdk_bdev.so.15.0 00:10:06.075 SYMLINK libspdk_bdev.so 00:10:06.075 LIB libspdk_blobfs.a 00:10:06.075 SO libspdk_blobfs.so.10.0 00:10:06.334 LIB libspdk_lvol.a 00:10:06.334 SYMLINK libspdk_blobfs.so 00:10:06.334 SO libspdk_lvol.so.10.0 00:10:06.334 SYMLINK libspdk_lvol.so 00:10:06.334 CC lib/ftl/ftl_init.o 00:10:06.334 CC lib/ftl/ftl_core.o 00:10:06.334 CC lib/nvmf/ctrlr.o 00:10:06.334 CC lib/nvmf/ctrlr_discovery.o 00:10:06.334 CC lib/ftl/ftl_layout.o 00:10:06.334 CC lib/nvmf/ctrlr_bdev.o 00:10:06.334 CC lib/ftl/ftl_debug.o 00:10:06.334 CC lib/ftl/ftl_io.o 00:10:06.334 CC lib/nvmf/subsystem.o 00:10:06.334 CC lib/ftl/ftl_sb.o 00:10:06.334 CC lib/nvmf/nvmf.o 00:10:06.334 CC lib/ftl/ftl_l2p.o 00:10:06.334 CC lib/nvmf/nvmf_rpc.o 00:10:06.334 CC lib/ftl/ftl_nv_cache.o 00:10:06.334 CC lib/ftl/ftl_l2p_flat.o 00:10:06.334 CC lib/nvmf/transport.o 00:10:06.334 CC lib/nvmf/tcp.o 00:10:06.334 CC lib/ftl/ftl_band.o 00:10:06.334 CC lib/ftl/ftl_band_ops.o 00:10:06.334 CC lib/nvmf/stubs.o 00:10:06.334 CC lib/nvmf/mdns_server.o 00:10:06.334 CC lib/ftl/ftl_reloc.o 00:10:06.334 CC lib/ftl/ftl_writer.o 00:10:06.334 CC lib/nbd/nbd_rpc.o 00:10:06.334 CC lib/nbd/nbd.o 00:10:06.334 CC lib/nvmf/vfio_user.o 00:10:06.334 CC lib/ftl/ftl_rq.o 00:10:06.334 CC lib/scsi/dev.o 00:10:06.334 CC lib/nvmf/rdma.o 00:10:06.334 CC lib/ftl/ftl_l2p_cache.o 00:10:06.334 CC lib/ftl/ftl_p2l.o 00:10:06.334 CC lib/scsi/lun.o 00:10:06.334 CC lib/nvmf/auth.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt.o 00:10:06.334 CC lib/ublk/ublk.o 00:10:06.334 CC lib/scsi/scsi.o 00:10:06.334 CC lib/ublk/ublk_rpc.o 00:10:06.334 CC lib/scsi/port.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:06.334 CC lib/scsi/scsi_bdev.o 00:10:06.334 CC lib/scsi/scsi_pr.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:06.334 CC lib/scsi/scsi_rpc.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:06.334 CC lib/scsi/task.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:10:06.334 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:06.334 CC lib/ftl/utils/ftl_conf.o 00:10:06.334 CC lib/ftl/utils/ftl_md.o 00:10:06.593 CC lib/ftl/utils/ftl_property.o 00:10:06.593 CC lib/ftl/utils/ftl_mempool.o 00:10:06.593 CC lib/ftl/utils/ftl_bitmap.o 00:10:06.593 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:06.593 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:06.593 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:06.593 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:10:06.593 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:06.593 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:06.593 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:10:06.593 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:06.593 CC lib/ftl/nvc/ftl_nvc_dev.o 00:10:06.593 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:06.593 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:06.593 CC lib/ftl/base/ftl_base_dev.o 00:10:06.593 CC lib/ftl/base/ftl_base_bdev.o 00:10:06.593 CC lib/ftl/ftl_trace.o 00:10:07.164 LIB libspdk_nbd.a 00:10:07.164 SO libspdk_nbd.so.7.0 00:10:07.164 SYMLINK libspdk_nbd.so 00:10:07.164 LIB libspdk_scsi.a 00:10:07.164 LIB libspdk_ublk.a 00:10:07.164 SO libspdk_scsi.so.9.0 00:10:07.164 SO libspdk_ublk.so.3.0 00:10:07.426 SYMLINK libspdk_ublk.so 00:10:07.426 SYMLINK libspdk_scsi.so 00:10:07.426 LIB libspdk_ftl.a 00:10:07.685 SO libspdk_ftl.so.9.0 00:10:07.685 CC lib/iscsi/conn.o 00:10:07.685 CC lib/iscsi/init_grp.o 00:10:07.685 CC lib/iscsi/iscsi.o 00:10:07.685 CC lib/iscsi/md5.o 00:10:07.685 CC lib/iscsi/param.o 00:10:07.685 CC lib/iscsi/portal_grp.o 00:10:07.685 CC lib/iscsi/tgt_node.o 00:10:07.685 CC lib/iscsi/iscsi_subsystem.o 00:10:07.685 CC lib/iscsi/iscsi_rpc.o 00:10:07.685 CC lib/iscsi/task.o 00:10:07.685 CC lib/vhost/vhost.o 00:10:07.685 CC lib/vhost/vhost_rpc.o 00:10:07.685 CC lib/vhost/vhost_scsi.o 00:10:07.685 CC lib/vhost/vhost_blk.o 00:10:07.685 CC lib/vhost/rte_vhost_user.o 00:10:07.943 SYMLINK libspdk_ftl.so 00:10:08.510 LIB libspdk_nvmf.a 00:10:08.510 SO libspdk_nvmf.so.18.1 00:10:08.769 LIB libspdk_vhost.a 00:10:08.769 SO libspdk_vhost.so.8.0 00:10:08.769 SYMLINK libspdk_nvmf.so 00:10:09.028 SYMLINK libspdk_vhost.so 00:10:09.028 LIB libspdk_iscsi.a 00:10:09.028 SO libspdk_iscsi.so.8.0 00:10:09.287 SYMLINK libspdk_iscsi.so 00:10:09.855 CC module/env_dpdk/env_dpdk_rpc.o 00:10:09.855 CC module/vfu_device/vfu_virtio.o 00:10:09.855 CC module/vfu_device/vfu_virtio_blk.o 00:10:09.855 CC module/vfu_device/vfu_virtio_scsi.o 00:10:09.855 CC module/vfu_device/vfu_virtio_rpc.o 00:10:09.855 CC module/blob/bdev/blob_bdev.o 00:10:09.855 LIB libspdk_env_dpdk_rpc.a 00:10:10.114 CC module/accel/ioat/accel_ioat.o 00:10:10.114 CC module/accel/ioat/accel_ioat_rpc.o 00:10:10.114 CC module/keyring/file/keyring.o 00:10:10.114 CC module/keyring/file/keyring_rpc.o 00:10:10.114 CC module/accel/dsa/accel_dsa.o 00:10:10.114 CC module/accel/dsa/accel_dsa_rpc.o 00:10:10.114 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:10.114 CC module/sock/posix/posix.o 00:10:10.114 CC module/accel/iaa/accel_iaa_rpc.o 00:10:10.114 CC module/accel/iaa/accel_iaa.o 00:10:10.114 CC module/accel/error/accel_error.o 00:10:10.114 CC module/accel/error/accel_error_rpc.o 00:10:10.114 CC module/scheduler/gscheduler/gscheduler.o 00:10:10.114 CC module/keyring/linux/keyring.o 00:10:10.114 CC module/keyring/linux/keyring_rpc.o 00:10:10.114 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:10.114 SO libspdk_env_dpdk_rpc.so.6.0 00:10:10.114 SYMLINK libspdk_env_dpdk_rpc.so 00:10:10.114 LIB libspdk_keyring_file.a 00:10:10.114 LIB libspdk_keyring_linux.a 00:10:10.114 LIB libspdk_scheduler_gscheduler.a 00:10:10.114 LIB libspdk_scheduler_dpdk_governor.a 00:10:10.114 LIB libspdk_accel_ioat.a 00:10:10.114 SO libspdk_keyring_linux.so.1.0 00:10:10.114 SO libspdk_keyring_file.so.1.0 00:10:10.114 LIB libspdk_accel_error.a 00:10:10.114 LIB libspdk_scheduler_dynamic.a 00:10:10.114 LIB libspdk_accel_iaa.a 00:10:10.373 SO libspdk_scheduler_gscheduler.so.4.0 00:10:10.373 LIB libspdk_blob_bdev.a 00:10:10.373 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:10.373 SO libspdk_accel_ioat.so.6.0 00:10:10.373 SO libspdk_accel_error.so.2.0 00:10:10.373 SO libspdk_scheduler_dynamic.so.4.0 00:10:10.373 LIB libspdk_accel_dsa.a 00:10:10.373 SO libspdk_accel_iaa.so.3.0 00:10:10.373 SYMLINK libspdk_keyring_linux.so 00:10:10.373 SYMLINK libspdk_keyring_file.so 
00:10:10.373 SO libspdk_blob_bdev.so.11.0 00:10:10.373 SYMLINK libspdk_scheduler_gscheduler.so 00:10:10.373 SO libspdk_accel_dsa.so.5.0 00:10:10.373 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:10.373 SYMLINK libspdk_scheduler_dynamic.so 00:10:10.373 SYMLINK libspdk_accel_ioat.so 00:10:10.373 SYMLINK libspdk_accel_error.so 00:10:10.373 SYMLINK libspdk_accel_iaa.so 00:10:10.373 SYMLINK libspdk_blob_bdev.so 00:10:10.373 SYMLINK libspdk_accel_dsa.so 00:10:10.373 LIB libspdk_vfu_device.a 00:10:10.373 SO libspdk_vfu_device.so.3.0 00:10:10.633 SYMLINK libspdk_vfu_device.so 00:10:10.633 LIB libspdk_sock_posix.a 00:10:10.892 SO libspdk_sock_posix.so.6.0 00:10:10.892 SYMLINK libspdk_sock_posix.so 00:10:10.892 CC module/blobfs/bdev/blobfs_bdev.o 00:10:10.892 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:10.892 CC module/bdev/lvol/vbdev_lvol.o 00:10:10.892 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:10.892 CC module/bdev/error/vbdev_error.o 00:10:10.892 CC module/bdev/error/vbdev_error_rpc.o 00:10:10.892 CC module/bdev/split/vbdev_split.o 00:10:10.892 CC module/bdev/split/vbdev_split_rpc.o 00:10:10.892 CC module/bdev/gpt/gpt.o 00:10:10.892 CC module/bdev/gpt/vbdev_gpt.o 00:10:10.892 CC module/bdev/delay/vbdev_delay.o 00:10:10.892 CC module/bdev/null/bdev_null.o 00:10:10.892 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:10.892 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:10.892 CC module/bdev/malloc/bdev_malloc.o 00:10:10.892 CC module/bdev/null/bdev_null_rpc.o 00:10:10.892 CC module/bdev/raid/bdev_raid.o 00:10:10.892 CC module/bdev/raid/bdev_raid_rpc.o 00:10:10.892 CC module/bdev/raid/raid0.o 00:10:10.892 CC module/bdev/raid/bdev_raid_sb.o 00:10:10.892 CC module/bdev/aio/bdev_aio_rpc.o 00:10:10.892 CC module/bdev/raid/raid1.o 00:10:10.892 CC module/bdev/aio/bdev_aio.o 00:10:10.892 CC module/bdev/raid/concat.o 00:10:10.892 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:10.892 CC module/bdev/passthru/vbdev_passthru.o 00:10:10.892 CC module/bdev/ftl/bdev_ftl.o 00:10:10.892 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:10.892 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:10.892 CC module/bdev/nvme/bdev_nvme.o 00:10:10.892 CC module/bdev/nvme/nvme_rpc.o 00:10:10.892 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:10.892 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:10.892 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:10.892 CC module/bdev/virtio/bdev_virtio_blk.o 00:10:10.892 CC module/bdev/nvme/vbdev_opal.o 00:10:10.892 CC module/bdev/nvme/bdev_mdns_client.o 00:10:10.892 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:10.892 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:10.892 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:10.892 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:10.892 CC module/bdev/iscsi/bdev_iscsi.o 00:10:11.151 LIB libspdk_blobfs_bdev.a 00:10:11.151 SO libspdk_blobfs_bdev.so.6.0 00:10:11.151 LIB libspdk_bdev_split.a 00:10:11.151 LIB libspdk_bdev_error.a 00:10:11.151 SO libspdk_bdev_split.so.6.0 00:10:11.151 LIB libspdk_bdev_gpt.a 00:10:11.411 LIB libspdk_bdev_null.a 00:10:11.411 SYMLINK libspdk_blobfs_bdev.so 00:10:11.411 LIB libspdk_bdev_aio.a 00:10:11.411 SO libspdk_bdev_error.so.6.0 00:10:11.411 SO libspdk_bdev_gpt.so.6.0 00:10:11.411 SO libspdk_bdev_aio.so.6.0 00:10:11.411 SO libspdk_bdev_null.so.6.0 00:10:11.411 LIB libspdk_bdev_ftl.a 00:10:11.411 LIB libspdk_bdev_passthru.a 00:10:11.411 LIB libspdk_bdev_zone_block.a 00:10:11.411 SYMLINK libspdk_bdev_split.so 00:10:11.411 SYMLINK libspdk_bdev_error.so 00:10:11.411 LIB libspdk_bdev_malloc.a 00:10:11.411 SO libspdk_bdev_ftl.so.6.0 
00:10:11.411 LIB libspdk_bdev_delay.a 00:10:11.411 SO libspdk_bdev_passthru.so.6.0 00:10:11.411 SO libspdk_bdev_zone_block.so.6.0 00:10:11.411 SYMLINK libspdk_bdev_gpt.so 00:10:11.411 LIB libspdk_bdev_iscsi.a 00:10:11.411 SYMLINK libspdk_bdev_null.so 00:10:11.411 SYMLINK libspdk_bdev_aio.so 00:10:11.411 SO libspdk_bdev_malloc.so.6.0 00:10:11.411 SO libspdk_bdev_delay.so.6.0 00:10:11.411 SO libspdk_bdev_iscsi.so.6.0 00:10:11.411 SYMLINK libspdk_bdev_ftl.so 00:10:11.411 SYMLINK libspdk_bdev_passthru.so 00:10:11.411 SYMLINK libspdk_bdev_zone_block.so 00:10:11.411 LIB libspdk_bdev_lvol.a 00:10:11.411 SYMLINK libspdk_bdev_malloc.so 00:10:11.411 SYMLINK libspdk_bdev_delay.so 00:10:11.411 SO libspdk_bdev_lvol.so.6.0 00:10:11.411 SYMLINK libspdk_bdev_iscsi.so 00:10:11.669 LIB libspdk_bdev_virtio.a 00:10:11.669 SO libspdk_bdev_virtio.so.6.0 00:10:11.669 SYMLINK libspdk_bdev_lvol.so 00:10:11.669 SYMLINK libspdk_bdev_virtio.so 00:10:11.929 LIB libspdk_bdev_raid.a 00:10:11.929 SO libspdk_bdev_raid.so.6.0 00:10:12.188 SYMLINK libspdk_bdev_raid.so 00:10:12.756 LIB libspdk_bdev_nvme.a 00:10:12.756 SO libspdk_bdev_nvme.so.7.0 00:10:12.756 SYMLINK libspdk_bdev_nvme.so 00:10:13.691 CC module/event/subsystems/sock/sock.o 00:10:13.691 CC module/event/subsystems/vmd/vmd.o 00:10:13.691 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:13.691 CC module/event/subsystems/keyring/keyring.o 00:10:13.691 CC module/event/subsystems/scheduler/scheduler.o 00:10:13.691 CC module/event/subsystems/iobuf/iobuf.o 00:10:13.691 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:13.691 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:10:13.691 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:10:13.691 LIB libspdk_event_sock.a 00:10:13.691 LIB libspdk_event_keyring.a 00:10:13.691 LIB libspdk_event_vmd.a 00:10:13.691 LIB libspdk_event_scheduler.a 00:10:13.691 LIB libspdk_event_vhost_blk.a 00:10:13.691 LIB libspdk_event_vfu_tgt.a 00:10:13.691 SO libspdk_event_sock.so.5.0 00:10:13.691 LIB libspdk_event_iobuf.a 00:10:13.691 SO libspdk_event_keyring.so.1.0 00:10:13.691 SO libspdk_event_vmd.so.6.0 00:10:13.691 SO libspdk_event_vfu_tgt.so.3.0 00:10:13.691 SO libspdk_event_scheduler.so.4.0 00:10:13.691 SO libspdk_event_iobuf.so.3.0 00:10:13.691 SO libspdk_event_vhost_blk.so.3.0 00:10:13.951 SYMLINK libspdk_event_sock.so 00:10:13.951 SYMLINK libspdk_event_keyring.so 00:10:13.951 SYMLINK libspdk_event_vfu_tgt.so 00:10:13.951 SYMLINK libspdk_event_vmd.so 00:10:13.951 SYMLINK libspdk_event_scheduler.so 00:10:13.951 SYMLINK libspdk_event_vhost_blk.so 00:10:13.951 SYMLINK libspdk_event_iobuf.so 00:10:14.209 CC module/event/subsystems/accel/accel.o 00:10:14.469 LIB libspdk_event_accel.a 00:10:14.469 SO libspdk_event_accel.so.6.0 00:10:14.469 SYMLINK libspdk_event_accel.so 00:10:15.036 CC module/event/subsystems/bdev/bdev.o 00:10:15.036 LIB libspdk_event_bdev.a 00:10:15.036 SO libspdk_event_bdev.so.6.0 00:10:15.295 SYMLINK libspdk_event_bdev.so 00:10:15.554 CC module/event/subsystems/ublk/ublk.o 00:10:15.554 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:15.554 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:15.554 CC module/event/subsystems/scsi/scsi.o 00:10:15.554 CC module/event/subsystems/nbd/nbd.o 00:10:15.813 LIB libspdk_event_ublk.a 00:10:15.813 LIB libspdk_event_nbd.a 00:10:15.813 LIB libspdk_event_scsi.a 00:10:15.813 SO libspdk_event_ublk.so.3.0 00:10:15.813 SO libspdk_event_nbd.so.6.0 00:10:15.813 SO libspdk_event_scsi.so.6.0 00:10:15.813 LIB libspdk_event_nvmf.a 00:10:15.813 SYMLINK libspdk_event_ublk.so 00:10:15.813 SYMLINK 
libspdk_event_nbd.so 00:10:15.813 SO libspdk_event_nvmf.so.6.0 00:10:15.813 SYMLINK libspdk_event_scsi.so 00:10:16.072 SYMLINK libspdk_event_nvmf.so 00:10:16.330 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:16.330 CC module/event/subsystems/iscsi/iscsi.o 00:10:16.330 LIB libspdk_event_vhost_scsi.a 00:10:16.330 LIB libspdk_event_iscsi.a 00:10:16.330 SO libspdk_event_vhost_scsi.so.3.0 00:10:16.589 SO libspdk_event_iscsi.so.6.0 00:10:16.589 SYMLINK libspdk_event_vhost_scsi.so 00:10:16.589 SYMLINK libspdk_event_iscsi.so 00:10:16.848 SO libspdk.so.6.0 00:10:16.848 SYMLINK libspdk.so 00:10:17.108 CXX app/trace/trace.o 00:10:17.108 CC app/spdk_nvme_discover/discovery_aer.o 00:10:17.108 CC app/trace_record/trace_record.o 00:10:17.108 CC test/rpc_client/rpc_client_test.o 00:10:17.108 TEST_HEADER include/spdk/accel.h 00:10:17.108 TEST_HEADER include/spdk/accel_module.h 00:10:17.108 TEST_HEADER include/spdk/assert.h 00:10:17.108 TEST_HEADER include/spdk/barrier.h 00:10:17.108 CC app/spdk_lspci/spdk_lspci.o 00:10:17.108 TEST_HEADER include/spdk/base64.h 00:10:17.108 TEST_HEADER include/spdk/bdev_module.h 00:10:17.108 CC app/spdk_nvme_identify/identify.o 00:10:17.108 TEST_HEADER include/spdk/bdev.h 00:10:17.108 CC app/spdk_top/spdk_top.o 00:10:17.108 TEST_HEADER include/spdk/bdev_zone.h 00:10:17.108 TEST_HEADER include/spdk/bit_array.h 00:10:17.108 TEST_HEADER include/spdk/blob_bdev.h 00:10:17.108 CC app/spdk_nvme_perf/perf.o 00:10:17.108 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:17.108 TEST_HEADER include/spdk/bit_pool.h 00:10:17.108 TEST_HEADER include/spdk/blobfs.h 00:10:17.108 TEST_HEADER include/spdk/blob.h 00:10:17.108 TEST_HEADER include/spdk/conf.h 00:10:17.108 TEST_HEADER include/spdk/config.h 00:10:17.108 TEST_HEADER include/spdk/cpuset.h 00:10:17.108 TEST_HEADER include/spdk/crc16.h 00:10:17.108 TEST_HEADER include/spdk/crc32.h 00:10:17.108 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:17.108 TEST_HEADER include/spdk/crc64.h 00:10:17.108 TEST_HEADER include/spdk/dma.h 00:10:17.108 TEST_HEADER include/spdk/endian.h 00:10:17.108 TEST_HEADER include/spdk/dif.h 00:10:17.108 TEST_HEADER include/spdk/env_dpdk.h 00:10:17.108 TEST_HEADER include/spdk/env.h 00:10:17.108 TEST_HEADER include/spdk/event.h 00:10:17.108 TEST_HEADER include/spdk/fd_group.h 00:10:17.108 TEST_HEADER include/spdk/fd.h 00:10:17.108 TEST_HEADER include/spdk/file.h 00:10:17.108 TEST_HEADER include/spdk/ftl.h 00:10:17.108 TEST_HEADER include/spdk/gpt_spec.h 00:10:17.108 TEST_HEADER include/spdk/hexlify.h 00:10:17.108 TEST_HEADER include/spdk/histogram_data.h 00:10:17.108 TEST_HEADER include/spdk/idxd.h 00:10:17.108 TEST_HEADER include/spdk/init.h 00:10:17.108 TEST_HEADER include/spdk/idxd_spec.h 00:10:17.108 TEST_HEADER include/spdk/ioat.h 00:10:17.108 TEST_HEADER include/spdk/ioat_spec.h 00:10:17.108 TEST_HEADER include/spdk/iscsi_spec.h 00:10:17.108 TEST_HEADER include/spdk/jsonrpc.h 00:10:17.108 TEST_HEADER include/spdk/json.h 00:10:17.108 TEST_HEADER include/spdk/keyring.h 00:10:17.108 TEST_HEADER include/spdk/keyring_module.h 00:10:17.108 CC app/nvmf_tgt/nvmf_main.o 00:10:17.108 CC app/iscsi_tgt/iscsi_tgt.o 00:10:17.108 TEST_HEADER include/spdk/likely.h 00:10:17.376 TEST_HEADER include/spdk/log.h 00:10:17.376 TEST_HEADER include/spdk/memory.h 00:10:17.376 TEST_HEADER include/spdk/lvol.h 00:10:17.376 TEST_HEADER include/spdk/mmio.h 00:10:17.376 TEST_HEADER include/spdk/nbd.h 00:10:17.376 TEST_HEADER include/spdk/notify.h 00:10:17.376 TEST_HEADER include/spdk/nvme.h 00:10:17.376 TEST_HEADER 
include/spdk/nvme_intel.h 00:10:17.376 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:17.377 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:17.377 CC app/vhost/vhost.o 00:10:17.377 TEST_HEADER include/spdk/nvme_spec.h 00:10:17.377 TEST_HEADER include/spdk/nvme_zns.h 00:10:17.377 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:17.377 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:17.377 TEST_HEADER include/spdk/nvmf.h 00:10:17.377 TEST_HEADER include/spdk/nvmf_spec.h 00:10:17.377 TEST_HEADER include/spdk/nvmf_transport.h 00:10:17.377 TEST_HEADER include/spdk/opal.h 00:10:17.377 TEST_HEADER include/spdk/opal_spec.h 00:10:17.377 TEST_HEADER include/spdk/pci_ids.h 00:10:17.377 TEST_HEADER include/spdk/pipe.h 00:10:17.377 TEST_HEADER include/spdk/queue.h 00:10:17.377 TEST_HEADER include/spdk/reduce.h 00:10:17.377 TEST_HEADER include/spdk/rpc.h 00:10:17.377 CC app/spdk_dd/spdk_dd.o 00:10:17.377 TEST_HEADER include/spdk/scheduler.h 00:10:17.377 TEST_HEADER include/spdk/scsi.h 00:10:17.377 TEST_HEADER include/spdk/scsi_spec.h 00:10:17.377 TEST_HEADER include/spdk/sock.h 00:10:17.377 TEST_HEADER include/spdk/stdinc.h 00:10:17.377 TEST_HEADER include/spdk/string.h 00:10:17.377 TEST_HEADER include/spdk/thread.h 00:10:17.377 TEST_HEADER include/spdk/trace.h 00:10:17.377 TEST_HEADER include/spdk/trace_parser.h 00:10:17.377 TEST_HEADER include/spdk/tree.h 00:10:17.377 TEST_HEADER include/spdk/util.h 00:10:17.377 TEST_HEADER include/spdk/ublk.h 00:10:17.377 TEST_HEADER include/spdk/uuid.h 00:10:17.377 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:17.377 TEST_HEADER include/spdk/version.h 00:10:17.377 CC app/spdk_tgt/spdk_tgt.o 00:10:17.377 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:17.377 TEST_HEADER include/spdk/vhost.h 00:10:17.377 TEST_HEADER include/spdk/vmd.h 00:10:17.377 TEST_HEADER include/spdk/xor.h 00:10:17.377 TEST_HEADER include/spdk/zipf.h 00:10:17.377 CXX test/cpp_headers/accel.o 00:10:17.377 CXX test/cpp_headers/assert.o 00:10:17.377 CXX test/cpp_headers/accel_module.o 00:10:17.377 CXX test/cpp_headers/barrier.o 00:10:17.377 CXX test/cpp_headers/base64.o 00:10:17.377 CXX test/cpp_headers/bdev.o 00:10:17.377 CXX test/cpp_headers/bdev_module.o 00:10:17.377 CXX test/cpp_headers/bdev_zone.o 00:10:17.377 CXX test/cpp_headers/bit_array.o 00:10:17.377 CXX test/cpp_headers/blob_bdev.o 00:10:17.377 CXX test/cpp_headers/bit_pool.o 00:10:17.377 CXX test/cpp_headers/blobfs.o 00:10:17.377 CXX test/cpp_headers/blobfs_bdev.o 00:10:17.377 CXX test/cpp_headers/blob.o 00:10:17.377 CXX test/cpp_headers/conf.o 00:10:17.377 CXX test/cpp_headers/config.o 00:10:17.377 CXX test/cpp_headers/cpuset.o 00:10:17.377 CXX test/cpp_headers/crc32.o 00:10:17.377 CXX test/cpp_headers/crc16.o 00:10:17.377 CXX test/cpp_headers/crc64.o 00:10:17.377 CXX test/cpp_headers/dif.o 00:10:17.377 CXX test/cpp_headers/dma.o 00:10:17.377 CXX test/cpp_headers/endian.o 00:10:17.377 CXX test/cpp_headers/env.o 00:10:17.377 CXX test/cpp_headers/env_dpdk.o 00:10:17.377 CXX test/cpp_headers/fd_group.o 00:10:17.377 CXX test/cpp_headers/event.o 00:10:17.377 CXX test/cpp_headers/fd.o 00:10:17.377 CXX test/cpp_headers/file.o 00:10:17.377 CXX test/cpp_headers/ftl.o 00:10:17.377 CXX test/cpp_headers/gpt_spec.o 00:10:17.377 CXX test/cpp_headers/hexlify.o 00:10:17.377 CXX test/cpp_headers/histogram_data.o 00:10:17.377 CXX test/cpp_headers/idxd.o 00:10:17.377 CXX test/cpp_headers/idxd_spec.o 00:10:17.377 CXX test/cpp_headers/init.o 00:10:17.377 CXX test/cpp_headers/ioat.o 00:10:17.377 CC examples/ioat/verify/verify.o 00:10:17.377 CXX 
test/cpp_headers/ioat_spec.o 00:10:17.377 CC examples/accel/perf/accel_perf.o 00:10:17.377 CC examples/ioat/perf/perf.o 00:10:17.377 CC test/app/histogram_perf/histogram_perf.o 00:10:17.643 CC examples/idxd/perf/perf.o 00:10:17.643 CC examples/nvme/hotplug/hotplug.o 00:10:17.643 CC test/env/memory/memory_ut.o 00:10:17.643 CC test/app/jsoncat/jsoncat.o 00:10:17.643 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:17.643 CC test/app/stub/stub.o 00:10:17.643 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:17.643 CC examples/nvme/abort/abort.o 00:10:17.643 CC examples/vmd/led/led.o 00:10:17.643 CC examples/nvme/arbitration/arbitration.o 00:10:17.643 CC test/thread/poller_perf/poller_perf.o 00:10:17.643 CC test/env/vtophys/vtophys.o 00:10:17.643 CC examples/vmd/lsvmd/lsvmd.o 00:10:17.643 CC test/env/pci/pci_ut.o 00:10:17.643 CC app/fio/nvme/fio_plugin.o 00:10:17.643 CC examples/nvme/reconnect/reconnect.o 00:10:17.643 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:17.643 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:17.643 CC examples/util/zipf/zipf.o 00:10:17.643 CC examples/nvme/hello_world/hello_world.o 00:10:17.643 CC test/nvme/e2edp/nvme_dp.o 00:10:17.643 CC test/event/reactor_perf/reactor_perf.o 00:10:17.643 CC test/event/event_perf/event_perf.o 00:10:17.643 CC test/app/bdev_svc/bdev_svc.o 00:10:17.643 CC test/nvme/reset/reset.o 00:10:17.643 CC test/accel/dif/dif.o 00:10:17.643 CC test/nvme/connect_stress/connect_stress.o 00:10:17.643 CC test/nvme/overhead/overhead.o 00:10:17.643 CC examples/sock/hello_world/hello_sock.o 00:10:17.643 CC test/nvme/aer/aer.o 00:10:17.643 CC test/event/reactor/reactor.o 00:10:17.643 CC test/nvme/boot_partition/boot_partition.o 00:10:17.643 CC test/nvme/cuse/cuse.o 00:10:17.643 CC test/dma/test_dma/test_dma.o 00:10:17.643 CC test/nvme/simple_copy/simple_copy.o 00:10:17.643 CC test/nvme/reserve/reserve.o 00:10:17.643 CC test/nvme/startup/startup.o 00:10:17.643 CC test/nvme/fused_ordering/fused_ordering.o 00:10:17.643 CC test/event/app_repeat/app_repeat.o 00:10:17.644 CC test/nvme/err_injection/err_injection.o 00:10:17.644 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:17.644 CC examples/bdev/bdevperf/bdevperf.o 00:10:17.644 CC test/nvme/fdp/fdp.o 00:10:17.644 CC test/nvme/sgl/sgl.o 00:10:17.644 CC test/nvme/compliance/nvme_compliance.o 00:10:17.644 CC examples/blob/cli/blobcli.o 00:10:17.644 CC examples/thread/thread/thread_ex.o 00:10:17.644 CC examples/blob/hello_world/hello_blob.o 00:10:17.644 CC test/blobfs/mkfs/mkfs.o 00:10:17.644 CC examples/nvmf/nvmf/nvmf.o 00:10:17.644 CC app/fio/bdev/fio_plugin.o 00:10:17.644 CC examples/bdev/hello_world/hello_bdev.o 00:10:17.644 CC test/bdev/bdevio/bdevio.o 00:10:17.644 CC test/event/scheduler/scheduler.o 00:10:17.644 LINK spdk_lspci 00:10:17.919 LINK interrupt_tgt 00:10:17.919 LINK spdk_nvme_discover 00:10:17.919 LINK rpc_client_test 00:10:17.919 LINK nvmf_tgt 00:10:18.184 CC test/env/mem_callbacks/mem_callbacks.o 00:10:18.185 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:18.185 LINK iscsi_tgt 00:10:18.185 LINK vhost 00:10:18.185 LINK spdk_trace_record 00:10:18.185 LINK histogram_perf 00:10:18.185 LINK lsvmd 00:10:18.185 CC test/lvol/esnap/esnap.o 00:10:18.185 LINK vtophys 00:10:18.185 CXX test/cpp_headers/iscsi_spec.o 00:10:18.185 LINK led 00:10:18.185 LINK event_perf 00:10:18.185 LINK poller_perf 00:10:18.185 LINK zipf 00:10:18.185 LINK reactor_perf 00:10:18.185 LINK jsoncat 00:10:18.185 CXX test/cpp_headers/json.o 00:10:18.185 CXX test/cpp_headers/jsonrpc.o 00:10:18.185 LINK reactor 00:10:18.185 
CXX test/cpp_headers/keyring_module.o 00:10:18.185 CXX test/cpp_headers/keyring.o 00:10:18.185 CXX test/cpp_headers/likely.o 00:10:18.185 CXX test/cpp_headers/log.o 00:10:18.185 CXX test/cpp_headers/lvol.o 00:10:18.185 CXX test/cpp_headers/memory.o 00:10:18.185 LINK bdev_svc 00:10:18.185 LINK pmr_persistence 00:10:18.185 LINK ioat_perf 00:10:18.185 LINK stub 00:10:18.185 LINK env_dpdk_post_init 00:10:18.185 CXX test/cpp_headers/mmio.o 00:10:18.185 CXX test/cpp_headers/nbd.o 00:10:18.185 CXX test/cpp_headers/notify.o 00:10:18.185 LINK spdk_tgt 00:10:18.185 CXX test/cpp_headers/nvme.o 00:10:18.185 LINK app_repeat 00:10:18.185 CXX test/cpp_headers/nvme_intel.o 00:10:18.185 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:18.185 CXX test/cpp_headers/nvme_ocssd.o 00:10:18.185 CXX test/cpp_headers/nvme_spec.o 00:10:18.185 CXX test/cpp_headers/nvme_zns.o 00:10:18.185 CXX test/cpp_headers/nvmf_cmd.o 00:10:18.185 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:18.185 CXX test/cpp_headers/nvmf.o 00:10:18.185 LINK boot_partition 00:10:18.185 CXX test/cpp_headers/nvmf_transport.o 00:10:18.185 CXX test/cpp_headers/opal.o 00:10:18.185 CXX test/cpp_headers/nvmf_spec.o 00:10:18.185 CXX test/cpp_headers/opal_spec.o 00:10:18.185 CXX test/cpp_headers/pipe.o 00:10:18.185 CXX test/cpp_headers/pci_ids.o 00:10:18.185 CXX test/cpp_headers/queue.o 00:10:18.185 LINK startup 00:10:18.185 CXX test/cpp_headers/reduce.o 00:10:18.185 LINK cmb_copy 00:10:18.185 LINK hotplug 00:10:18.185 LINK doorbell_aers 00:10:18.185 LINK connect_stress 00:10:18.185 CXX test/cpp_headers/rpc.o 00:10:18.185 CXX test/cpp_headers/scheduler.o 00:10:18.185 CXX test/cpp_headers/scsi.o 00:10:18.448 CXX test/cpp_headers/scsi_spec.o 00:10:18.448 LINK fused_ordering 00:10:18.448 CXX test/cpp_headers/sock.o 00:10:18.448 CXX test/cpp_headers/stdinc.o 00:10:18.448 LINK hello_sock 00:10:18.448 LINK verify 00:10:18.448 CXX test/cpp_headers/string.o 00:10:18.449 LINK err_injection 00:10:18.449 LINK reserve 00:10:18.449 CXX test/cpp_headers/thread.o 00:10:18.449 CXX test/cpp_headers/trace.o 00:10:18.449 LINK simple_copy 00:10:18.449 LINK mkfs 00:10:18.449 CXX test/cpp_headers/trace_parser.o 00:10:18.449 LINK hello_world 00:10:18.449 LINK reset 00:10:18.449 LINK spdk_dd 00:10:18.449 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:18.449 CXX test/cpp_headers/tree.o 00:10:18.449 LINK overhead 00:10:18.449 CXX test/cpp_headers/ublk.o 00:10:18.449 LINK hello_blob 00:10:18.449 LINK thread 00:10:18.449 LINK sgl 00:10:18.449 LINK nvme_dp 00:10:18.449 LINK scheduler 00:10:18.449 LINK hello_bdev 00:10:18.449 CXX test/cpp_headers/util.o 00:10:18.449 CXX test/cpp_headers/uuid.o 00:10:18.449 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:18.449 LINK reconnect 00:10:18.449 LINK aer 00:10:18.449 LINK idxd_perf 00:10:18.710 LINK spdk_trace 00:10:18.710 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:18.710 LINK nvmf 00:10:18.710 LINK arbitration 00:10:18.710 CXX test/cpp_headers/version.o 00:10:18.710 LINK nvme_compliance 00:10:18.710 CXX test/cpp_headers/vfio_user_pci.o 00:10:18.710 CXX test/cpp_headers/vfio_user_spec.o 00:10:18.710 CXX test/cpp_headers/vmd.o 00:10:18.710 CXX test/cpp_headers/vhost.o 00:10:18.710 CXX test/cpp_headers/xor.o 00:10:18.710 CXX test/cpp_headers/zipf.o 00:10:18.710 LINK accel_perf 00:10:18.710 LINK fdp 00:10:18.710 LINK test_dma 00:10:18.710 LINK abort 00:10:18.710 LINK pci_ut 00:10:18.710 LINK bdevio 00:10:18.967 LINK dif 00:10:18.967 LINK nvme_manage 00:10:18.967 LINK blobcli 00:10:18.967 LINK spdk_bdev 00:10:18.967 LINK spdk_nvme 00:10:18.967 
LINK spdk_nvme_perf 00:10:19.226 LINK nvme_fuzz 00:10:19.226 LINK spdk_top 00:10:19.226 LINK bdevperf 00:10:19.226 LINK spdk_nvme_identify 00:10:19.226 LINK mem_callbacks 00:10:19.226 LINK vhost_fuzz 00:10:19.484 LINK memory_ut 00:10:19.741 LINK cuse 00:10:20.309 LINK iscsi_fuzz 00:10:23.596 LINK esnap 00:10:23.596 00:10:23.596 real 0m55.274s 00:10:23.596 user 7m56.037s 00:10:23.596 sys 5m4.055s 00:10:23.597 11:18:48 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:10:23.597 11:18:48 make -- common/autotest_common.sh@10 -- $ set +x 00:10:23.597 ************************************ 00:10:23.597 END TEST make 00:10:23.597 ************************************ 00:10:23.856 11:18:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:23.856 11:18:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:23.856 11:18:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:23.856 11:18:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:23.856 11:18:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:10:23.856 11:18:48 -- pm/common@44 -- $ pid=3642180 00:10:23.856 11:18:48 -- pm/common@50 -- $ kill -TERM 3642180 00:10:23.856 11:18:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:23.856 11:18:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:10:23.856 11:18:48 -- pm/common@44 -- $ pid=3642181 00:10:23.856 11:18:48 -- pm/common@50 -- $ kill -TERM 3642181 00:10:23.856 11:18:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:23.856 11:18:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:10:23.856 11:18:48 -- pm/common@44 -- $ pid=3642183 00:10:23.856 11:18:48 -- pm/common@50 -- $ kill -TERM 3642183 00:10:23.856 11:18:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:23.856 11:18:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:10:23.856 11:18:48 -- pm/common@44 -- $ pid=3642208 00:10:23.856 11:18:48 -- pm/common@50 -- $ sudo -E kill -TERM 3642208 00:10:23.856 11:18:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.856 11:18:48 -- nvmf/common.sh@7 -- # uname -s 00:10:23.856 11:18:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.856 11:18:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.856 11:18:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.856 11:18:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.856 11:18:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.856 11:18:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.856 11:18:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.856 11:18:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.856 11:18:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.856 11:18:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.856 11:18:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:10:23.856 11:18:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:10:23.856 11:18:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.856 11:18:48 -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:23.856 11:18:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.856 11:18:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.856 11:18:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.856 11:18:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.856 11:18:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.856 11:18:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.856 11:18:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.856 11:18:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.856 11:18:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.856 11:18:48 -- paths/export.sh@5 -- # export PATH 00:10:23.856 11:18:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.856 11:18:48 -- nvmf/common.sh@47 -- # : 0 00:10:23.856 11:18:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.856 11:18:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.856 11:18:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.856 11:18:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.856 11:18:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.856 11:18:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.856 11:18:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.856 11:18:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.856 11:18:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:23.856 11:18:48 -- spdk/autotest.sh@32 -- # uname -s 00:10:23.856 11:18:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:23.856 11:18:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:23.856 11:18:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:10:23.856 11:18:48 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:10:23.856 11:18:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:10:23.856 11:18:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:23.856 11:18:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:23.856 11:18:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:23.856 11:18:48 -- spdk/autotest.sh@48 -- # udevadm_pid=3704661 00:10:23.856 11:18:48 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:10:23.856 11:18:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:23.856 11:18:48 -- pm/common@17 -- # local monitor 00:10:23.856 11:18:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:23.856 11:18:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:23.856 11:18:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:23.856 11:18:48 -- pm/common@21 -- # date +%s 00:10:23.856 11:18:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:23.856 11:18:48 -- pm/common@21 -- # date +%s 00:10:23.856 11:18:48 -- pm/common@25 -- # sleep 1 00:10:23.856 11:18:48 -- pm/common@21 -- # date +%s 00:10:23.856 11:18:48 -- pm/common@21 -- # date +%s 00:10:23.856 11:18:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011128 00:10:23.856 11:18:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011128 00:10:23.856 11:18:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011128 00:10:23.856 11:18:48 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718011128 00:10:24.115 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011128_collect-vmstat.pm.log 00:10:24.115 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011128_collect-cpu-load.pm.log 00:10:24.115 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011128_collect-cpu-temp.pm.log 00:10:24.115 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718011128_collect-bmc-pm.bmc.pm.log 00:10:25.053 11:18:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:25.053 11:18:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:25.053 11:18:49 -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:25.053 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:10:25.053 11:18:49 -- spdk/autotest.sh@59 -- # create_test_list 00:10:25.053 11:18:49 -- common/autotest_common.sh@747 -- # xtrace_disable 00:10:25.053 11:18:49 -- common/autotest_common.sh@10 -- # set +x 00:10:25.053 11:18:49 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:10:25.053 11:18:49 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:25.053 11:18:49 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:25.053 11:18:49 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:10:25.053 11:18:49 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:25.053 11:18:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:25.053 11:18:49 -- common/autotest_common.sh@1454 -- # uname 00:10:25.053 11:18:50 -- 
common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:10:25.053 11:18:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:25.053 11:18:50 -- common/autotest_common.sh@1474 -- # uname 00:10:25.053 11:18:50 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:10:25.053 11:18:50 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:10:25.053 11:18:50 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:10:25.053 11:18:50 -- spdk/autotest.sh@72 -- # hash lcov 00:10:25.053 11:18:50 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:10:25.053 11:18:50 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:10:25.053 --rc lcov_branch_coverage=1 00:10:25.053 --rc lcov_function_coverage=1 00:10:25.053 --rc genhtml_branch_coverage=1 00:10:25.053 --rc genhtml_function_coverage=1 00:10:25.053 --rc genhtml_legend=1 00:10:25.053 --rc geninfo_all_blocks=1 00:10:25.053 ' 00:10:25.053 11:18:50 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:10:25.053 --rc lcov_branch_coverage=1 00:10:25.053 --rc lcov_function_coverage=1 00:10:25.053 --rc genhtml_branch_coverage=1 00:10:25.053 --rc genhtml_function_coverage=1 00:10:25.053 --rc genhtml_legend=1 00:10:25.053 --rc geninfo_all_blocks=1 00:10:25.053 ' 00:10:25.053 11:18:50 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:10:25.053 --rc lcov_branch_coverage=1 00:10:25.053 --rc lcov_function_coverage=1 00:10:25.053 --rc genhtml_branch_coverage=1 00:10:25.053 --rc genhtml_function_coverage=1 00:10:25.053 --rc genhtml_legend=1 00:10:25.053 --rc geninfo_all_blocks=1 00:10:25.053 --no-external' 00:10:25.053 11:18:50 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:10:25.053 --rc lcov_branch_coverage=1 00:10:25.053 --rc lcov_function_coverage=1 00:10:25.053 --rc genhtml_branch_coverage=1 00:10:25.053 --rc genhtml_function_coverage=1 00:10:25.053 --rc genhtml_legend=1 00:10:25.053 --rc geninfo_all_blocks=1 00:10:25.053 --no-external' 00:10:25.053 11:18:50 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:10:25.053 lcov: LCOV version 1.14 00:10:25.053 11:18:50 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:10:39.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:39.964 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:10:54.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:10:54.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:10:54.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:10:54.906 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:10:54.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:10:54.907 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:10:54.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:10:54.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:10:54.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:10:54.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:10:54.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:10:54.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:10:54.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:10:54.907 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:10:55.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:10:55.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:10:55.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:10:55.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:10:55.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:10:55.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:10:55.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:10:55.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:10:55.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:10:55.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:10:55.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:10:55.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:10:55.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:10:55.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:10:55.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:10:55.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:10:55.167 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:10:55.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:10:55.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:10:55.426 
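The long run of "no functions found" / "did not produce any data" warnings is expected: the test/cpp_headers check compiles one translation unit per public SPDK header, and a translation unit that only includes a header defines no functions, so its .gcno gives geninfo nothing to record during the initial (baseline) capture. The effect can be reproduced in isolation with a sketch like the one below, assuming gcc with --coverage and lcov are available; the file names are made up for illustration and are not taken from the SPDK scripts:

    echo '#include <stdint.h>' > hdr_only.c        # a TU with no function definitions
    gcc --coverage -c hdr_only.c -o hdr_only.o     # writes hdr_only.gcno next to the object
    lcov --capture --initial --directory . --output-file hdr_only.info
    # geninfo warns "no functions found" for hdr_only.gcno, just as it does above

The warnings are informational; the baseline capture still succeeds and the header-only objects simply contribute no coverage records.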
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:10:55.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:10:55.426 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:10:55.685 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:10:55.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:10:55.685 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:10:55.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:10:55.945 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:10:55.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:10:55.945 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:10:55.945 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:10:55.945 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:10:55.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:10:55.945 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:10:55.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:10:55.945 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:10:57.845 11:19:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:10:57.845 11:19:22 -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:57.845 11:19:22 -- common/autotest_common.sh@10 -- # set +x 00:10:57.845 11:19:22 -- spdk/autotest.sh@91 -- # rm -f 00:10:57.845 11:19:22 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:02.037 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:11:02.037 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:11:02.037 11:19:26 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:11:02.037 11:19:26 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:11:02.037 11:19:26 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:11:02.037 11:19:26 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:11:02.037 11:19:26 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:11:02.037 11:19:26 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:11:02.037 11:19:26 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:11:02.037 11:19:26 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:02.037 11:19:26 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:11:02.037 11:19:26 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:11:02.037 11:19:26 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:02.037 11:19:26 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:02.037 11:19:26 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:11:02.037 11:19:26 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:11:02.037 11:19:26 -- scripts/common.sh@387 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:02.037 No valid GPT data, bailing 00:11:02.037 11:19:26 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:02.037 11:19:26 -- scripts/common.sh@391 -- # pt= 00:11:02.037 11:19:26 -- scripts/common.sh@392 -- # return 1 00:11:02.037 11:19:26 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:02.037 1+0 records in 00:11:02.037 1+0 records out 00:11:02.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00184422 s, 569 MB/s 00:11:02.037 11:19:26 -- spdk/autotest.sh@118 -- # sync 00:11:02.037 11:19:26 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:02.037 11:19:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:02.037 11:19:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:10.228 11:19:34 -- spdk/autotest.sh@124 -- # uname -s 00:11:10.228 11:19:34 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:11:10.228 11:19:34 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:11:10.228 11:19:34 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:10.228 11:19:34 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:10.228 11:19:34 -- common/autotest_common.sh@10 -- # set +x 00:11:10.228 ************************************ 00:11:10.228 START TEST setup.sh 00:11:10.228 ************************************ 00:11:10.228 11:19:34 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:11:10.228 * Looking for test storage... 00:11:10.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:11:10.228 11:19:34 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:11:10.228 11:19:34 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:11:10.228 11:19:34 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:11:10.228 11:19:34 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:10.228 11:19:34 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:10.228 11:19:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:10.228 ************************************ 00:11:10.228 START TEST acl 00:11:10.228 ************************************ 00:11:10.228 11:19:34 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:11:10.228 * Looking for test storage... 
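The GPT probe, blkid query and dd above are the "is this namespace free?" check: spdk-gpt.py finds no valid GPT, blkid reports an empty PTTYPE, block_in_use returns 1, and autotest zeroes the first MiB of the device before the functional tests start. A standalone sketch of the same decision, assuming a scratch /dev/nvme0n1 with nothing on it (the dd is destructive, so never point this at a disk holding data); it mirrors the commands shown in the log rather than quoting the scripts:

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty string when no partition table exists
    if [ -z "$pt" ]; then
        dd if=/dev/zero of="$dev" bs=1M count=1     # scrub the first MiB, as the test run does
        sync
    fi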
00:11:10.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:11:10.228 11:19:34 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:11:10.228 11:19:34 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:11:10.228 11:19:34 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:11:10.228 11:19:34 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:11:10.228 11:19:34 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:11:10.228 11:19:34 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:11:10.228 11:19:34 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:11:10.228 11:19:34 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:10.228 11:19:34 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:11:10.228 11:19:34 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:11:10.228 11:19:34 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:11:10.228 11:19:34 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:11:10.228 11:19:34 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:11:10.228 11:19:34 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:11:10.228 11:19:34 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:10.228 11:19:34 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:14.436 11:19:38 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:11:14.436 11:19:38 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:11:14.436 11:19:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:14.436 11:19:38 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:11:14.436 11:19:38 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:11:14.436 11:19:38 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:11:17.726 Hugepages 00:11:17.726 node hugesize free / total 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 00:11:17.726 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:11:17.726 11:19:42 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:11:17.726 11:19:42 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:17.726 11:19:42 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:17.726 11:19:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:17.726 ************************************ 00:11:17.726 START TEST denied 00:11:17.726 ************************************ 00:11:17.726 11:19:42 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:11:17.726 11:19:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:11:17.726 11:19:42 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:11:17.726 11:19:42 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:11:17.726 11:19:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:11:17.726 11:19:42 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:23.000 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:11:23.000 11:19:47 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:11:23.000 11:19:47 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:11:23.000 11:19:47 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:11:23.000 11:19:47 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:11:23.000 11:19:47 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:11:23.000 11:19:47 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:23.000 11:19:47 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:23.000 11:19:47 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:11:23.000 11:19:47 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:23.000 11:19:47 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:28.277 00:11:28.277 real 0m9.651s 00:11:28.277 user 0m2.986s 00:11:28.277 sys 0m5.859s 00:11:28.277 11:19:52 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:28.277 11:19:52 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:11:28.277 ************************************ 00:11:28.277 END TEST denied 00:11:28.277 ************************************ 00:11:28.277 11:19:52 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:11:28.277 11:19:52 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:28.277 11:19:52 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:28.277 11:19:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:28.277 ************************************ 00:11:28.277 START TEST allowed 00:11:28.277 ************************************ 00:11:28.277 11:19:52 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:11:28.277 11:19:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:11:28.277 11:19:52 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:11:28.277 11:19:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:11:28.277 11:19:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:11:28.277 11:19:52 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:11:33.555 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:11:33.555 11:19:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:11:33.555 11:19:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:11:33.555 11:19:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:11:33.555 11:19:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:33.555 11:19:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:11:37.775 00:11:37.775 real 0m10.245s 00:11:37.775 user 0m3.028s 00:11:37.775 sys 0m5.915s 00:11:37.775 11:20:02 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:37.775 11:20:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 ************************************ 00:11:37.775 END TEST allowed 00:11:37.775 ************************************ 00:11:37.775 00:11:37.775 real 0m28.397s 00:11:37.775 user 0m8.812s 00:11:37.775 sys 0m17.558s 00:11:37.775 11:20:02 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:37.775 11:20:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 ************************************ 00:11:37.775 END TEST acl 00:11:37.775 ************************************ 00:11:37.775 11:20:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:11:37.775 11:20:02 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:37.775 11:20:02 setup.sh -- 
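Both ACL sub-tests drive scripts/setup.sh purely through its environment: the denied case puts the controller's BDF in PCI_BLOCKED and expects "Skipping denied controller at 0000:d8:00.0" from setup output config, while the allowed case puts the same BDF in PCI_ALLOWED and expects the device to move from the kernel nvme driver to vfio-pci. Run by hand from the workspace, the same exercise looks roughly like the sketch below (root is required, and rebinding detaches the device from the kernel driver, so only try it on a test box):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    PCI_BLOCKED=' 0000:d8:00.0' ./scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:d8:00.0'    # denied: controller is left alone
    ./scripts/setup.sh reset                                    # hand it back to the nvme driver
    PCI_ALLOWED='0000:d8:00.0' ./scripts/setup.sh config        # allowed: nvme -> vfio-pci
    ./scripts/setup.sh reset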
common/autotest_common.sh@1106 -- # xtrace_disable 00:11:37.775 11:20:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:37.775 ************************************ 00:11:37.775 START TEST hugepages 00:11:37.775 ************************************ 00:11:37.775 11:20:02 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:11:38.038 * Looking for test storage... 00:11:38.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 38926360 kB' 'MemAvailable: 40860468 kB' 'Buffers: 3168 kB' 'Cached: 12828812 kB' 'SwapCached: 296 kB' 'Active: 10172392 kB' 'Inactive: 3283876 kB' 'Active(anon): 9718680 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 627296 kB' 'Mapped: 225836 kB' 'Shmem: 10491912 kB' 'KReclaimable: 498368 kB' 'Slab: 1164700 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 666332 kB' 'KernelStack: 22384 kB' 'PageTables: 9344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 12691724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219048 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.038 11:20:02 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.038 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:11:38.039 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 
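The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches Hugepagesize (2048 kB on this host); hugepages.sh then records the default size, the per-size and global nr_hugepages knobs, enumerates both NUMA nodes, and zeroes every pre-existing hugepage reservation. The sketch below is a condensed reconstruction of that pattern from the trace, not the verbatim contents of the SPDK setup scripts:

    # Rough sketch, reconstructed from the xtrace above; not the verbatim
    # spdk/test/setup/common.sh or hugepages.sh.
    get_meminfo() {                              # get_meminfo <field>
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue     # every non-matching field hits "continue" above
            echo "$val"                          # e.g. 2048 for Hugepagesize
            return 0
        done </proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 on this node
    default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
    global_huge_nr=/proc/sys/vm/nr_hugepages

    clear_hp() {                                 # drop any reservations left by earlier tests
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 >"$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }

With the pool cleared and CLEAR_HUGE exported, the default_setup test that starts next can assume it owns whatever hugepages it allocates.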
00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:38.040 11:20:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:11:38.040 11:20:02 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:38.040 11:20:02 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:38.040 11:20:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:38.040 ************************************ 00:11:38.040 START TEST default_setup 00:11:38.040 ************************************ 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:11:38.040 11:20:03 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:42.231 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:00:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:11:42.231 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:11:42.231 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:11:44.142 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:44.142 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41097088 kB' 'MemAvailable: 43031196 kB' 'Buffers: 3168 kB' 'Cached: 12828952 kB' 'SwapCached: 296 kB' 'Active: 10189064 kB' 'Inactive: 3283876 kB' 'Active(anon): 9735352 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643884 kB' 'Mapped: 226012 kB' 'Shmem: 10492052 kB' 'KReclaimable: 498368 kB' 'Slab: 1163956 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665588 kB' 'KernelStack: 22416 kB' 'PageTables: 9748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705772 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
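Before the scan in progress here, default_setup requested a 2 GiB pool pinned to NUMA node 0 (get_test_nr_hugepages 2097152 0) and ran scripts/setup.sh, which rebound the ioatdma channels and the NVMe controller at 0000:d8:00.0 to vfio-pci; the meminfo snapshot above already reports HugePages_Total and HugePages_Free at 1024. The page count is simply the requested size divided by the default hugepage size. A hedged sketch, reusing the helpers sketched earlier and names from the trace, though not guaranteed to match hugepages.sh line for line:

    # Sketch of the arithmetic behind "get_test_nr_hugepages 2097152 0".
    size=2097152                                  # requested pool size in kB (2 GiB)
    node_ids=(0)                                  # pin the whole pool to NUMA node 0
    (( size >= default_hugepages )) || exit 1     # must cover at least one 2048 kB page
    nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 pages
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages            # nodes_test[0]=1024, as the trace records
    done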
00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.143 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
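verify_nr_hugepages gathers a handful of counters this way: AnonHugePages (anon, 0 in the scan just finished), then HugePages_Surp and HugePages_Rsvd in the scans that follow, alongside the HugePages_Total and HugePages_Free values visible in the snapshots. The sketch below is an assumption about how those reads typically combine into the final check; the exact comparison in hugepages.sh may differ:

    # Assumption: illustrative combination of the counters read here, reusing the
    # get_meminfo and nr_hugepages sketches above; the real check may be stricter.
    anon=$(get_meminfo AnonHugePages)      # 0 here: no transparent hugepages in the way
    surp=$(get_meminfo HugePages_Surp)     # surplus pages handed out beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)     # pages reserved by mappings but not yet faulted in
    total=$(get_meminfo HugePages_Total)   # 1024 in the snapshots above
    free=$(get_meminfo HugePages_Free)     # also 1024: nothing has consumed the pool yet
    if (( total - surp - resv == nr_hugepages )); then
        echo "default_setup: hugepage pool matches the requested ${nr_hugepages} pages"
    else
        echo "default_setup: unexpected hugepage accounting" >&2
    fi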
00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41099184 kB' 'MemAvailable: 43033292 kB' 'Buffers: 3168 kB' 'Cached: 12828956 kB' 'SwapCached: 296 kB' 'Active: 10189156 kB' 'Inactive: 3283876 kB' 'Active(anon): 9735444 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643872 kB' 'Mapped: 225956 kB' 'Shmem: 10492056 kB' 'KReclaimable: 498368 kB' 'Slab: 1164004 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665636 kB' 'KernelStack: 22384 kB' 'PageTables: 9448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218888 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.144 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.145 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.145 11:20:08 
[log condensed: the get_meminfo scan continues at setup/common.sh@31-32; SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are each read and skipped because they do not match HugePages_Surp]
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
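[editor's note] The trace above and below this point is a single helper at work: setup/common.sh's get_meminfo reads a meminfo file line by line with IFS=': ' and prints the value of the one key it was asked for (here HugePages_Surp, HugePages_Rsvd and HugePages_Total in turn). A minimal, self-contained sketch of that pattern, reconstructed from the trace for readability -- treat it as an illustration of the technique, not the verbatim SPDK source, whose option and error handling may differ:

  #!/usr/bin/env bash
  shopt -s extglob

  # Print the value of a single meminfo key (e.g. HugePages_Surp).
  # With a node number as the second argument, read that NUMA node's
  # meminfo from sysfs instead of the system-wide /proc/meminfo.
  get_meminfo() {
    local get=$1
    local node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    elif [[ -n $node ]]; then
      return 1 # a node was requested but exposes no meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node files prefix every line with "Node N "; strip that prefix
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
        echo "$val" # value in kB, or a bare count for HugePages_* keys
        return 0
      fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }

  get_meminfo HugePages_Total    # prints 1024 on this machine
  get_meminfo HugePages_Surp 0   # surplus hugepages on NUMA node 0

Every "continue" record in the surrounding trace is simply this loop skipping a key that does not match; the lines worth reading are the final match, the echoed value and the variable it is assigned to (surp, resv, nr_hugepages).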
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:44.146 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41099968 kB' 'MemAvailable: 43034076 kB' 'Buffers: 3168 kB' 'Cached: 12828972 kB' 'SwapCached: 296 kB' 'Active: 10189192 kB' 'Inactive: 3283876 kB' 'Active(anon): 9735480 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643948 kB' 'Mapped: 225956 kB' 'Shmem: 10492072 kB' 'KReclaimable: 498368 kB' 'Slab: 1164004 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665636 kB' 'KernelStack: 22400 kB' 'PageTables: 9796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[log condensed: every key from MemTotal through HugePages_Free is read at setup/common.sh@31-32 and skipped because it does not match HugePages_Rsvd]
00:11:44.148 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:11:44.148 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:11:44.148 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:11:44.148 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:11:44.148 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:11:44.148 nr_hugepages=1024
00:11:44.148 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:11:44.148 resv_hugepages=0
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:11:44.149 surplus_hugepages=0
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:11:44.149 anon_hugepages=0
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
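[editor's note] The checks at setup/hugepages.sh@107 and @109 above, together with the get_meminfo HugePages_Total call that follows, verify that the hugepage pool is in the expected state before the test run proceeds. A compact sketch of that accounting, reusing the get_meminfo sketch above and the values observed in this run (illustrative only; the real setup/hugepages.sh wraps this in its own functions):

  nr_hugepages=1024                      # requested number of default-size (2048 kB) pages
  surp=$(get_meminfo HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo HugePages_Total)   # 1024 in this run

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"

  # every page in the pool must be accounted for by the requested count
  # plus surplus and reserved pages; here 1024 == 1024 + 0 + 0
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"

The same HugePages_Surp query is then repeated per NUMA node (the log below shows two nodes, with the 1024 pages expected on node 0) to check how the pool is distributed.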
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:44.149 11:20:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41099832 kB' 'MemAvailable: 43033940 kB' 'Buffers: 3168 kB' 'Cached: 12828996 kB' 'SwapCached: 296 kB' 'Active: 10189212 kB' 'Inactive: 3283876 kB' 'Active(anon): 9735500 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643884 kB' 'Mapped: 225956 kB' 'Shmem: 10492096 kB' 'KReclaimable: 498368 kB' 'Slab: 1164004 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665636 kB' 'KernelStack: 22416 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12705832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218952 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[log condensed: every key from MemTotal through Unaccepted is read at setup/common.sh@31-32 and skipped because it does not match HugePages_Total]
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:11:44.150 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:44.151 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:11:44.151 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:11:44.151 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:11:44.151 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:44.151 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21903828 kB' 'MemUsed: 10735312 kB' 'SwapCached: 284 kB' 'Active: 6290924 kB' 'Inactive: 1185808 kB' 'Active(anon): 5997796 kB' 'Inactive(anon): 1001252 kB' 'Active(file): 293128 kB' 'Inactive(file): 184556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7051636 kB' 'Mapped: 174812 kB' 'AnonPages: 428368 kB' 'Shmem: 6573668 kB' 'KernelStack: 12888 kB' 'PageTables: 6252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175500 kB' 'Slab: 480608 kB' 'SReclaimable: 175500 kB' 'SUnreclaim: 305108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[log condensed: the per-key scan of the node0 meminfo against HugePages_Surp follows the same read/skip pattern as above; the captured trace breaks off mid-scan at the HugePages_Total comparison (00:11:44.152 11:20:09)]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:44.152 node0=1024 expecting 1024 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:44.152 00:11:44.152 real 0m6.030s 00:11:44.152 user 0m1.697s 00:11:44.152 sys 0m2.947s 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:44.152 11:20:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:11:44.152 ************************************ 00:11:44.152 END TEST default_setup 00:11:44.152 ************************************ 00:11:44.152 11:20:09 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:11:44.152 11:20:09 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:44.152 11:20:09 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:44.152 11:20:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:44.152 ************************************ 00:11:44.152 START TEST per_node_1G_alloc 00:11:44.152 ************************************ 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
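(The per_node_1G_alloc test that starts above sizes its hugepage pool at setup/hugepages.sh@49-@73, continuing in the next entries: a 1048576 kB (1 GiB) request is converted into default-size 2048 kB hugepages and the resulting count is assigned to every node named on the command line, which is how NRHUGE=512 and HUGENODE=0,1 come about. A minimal bash sketch of that arithmetic follows; the function name and details are illustrative, not the real SPDK helpers.)

# Sketch only: mirrors the shape of get_test_nr_hugepages / get_test_nr_hugepages_per_node seen in the trace.
split_hugepages() {
    local size_kb=$1; shift                      # e.g. 1048576 kB requested
    local default_hugepage_kb=2048               # Hugepagesize shown in the meminfo dumps below
    local nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1048576 / 2048 = 512
    local node
    for node in "$@"; do                         # nodes named on the command line, e.g. 0 and 1
        echo "node$node gets $nr_hugepages hugepages"
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "$*")"
}
split_hugepages 1048576 0 1    # -> node0 gets 512 ... node1 gets 512 ... NRHUGE=512 HUGENODE=0,1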
00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:44.152 11:20:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:48.347 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:11:48.347 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41106832 kB' 'MemAvailable: 43040940 kB' 'Buffers: 3168 kB' 'Cached: 12829112 kB' 'SwapCached: 296 kB' 'Active: 10187332 kB' 'Inactive: 3283876 kB' 'Active(anon): 9733620 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641504 kB' 'Mapped: 225176 kB' 'Shmem: 10492212 kB' 'KReclaimable: 498368 kB' 'Slab: 1164356 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665988 kB' 'KernelStack: 22288 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12696340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 
7340032 kB' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
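(The common.sh@17-@33 entries in this stretch show the lookup pattern get_meminfo follows: default to /proc/meminfo, switch to the per-node sysfs meminfo file when a node is requested and the file exists, split each line on ': ', and print the value of the requested key while skipping every other key with continue. Below is a rough self-contained sketch of that pattern; it is only indicative, not the exact setup/common.sh implementation.)

# Rough sketch of the meminfo lookup exercised in this trace; illustrative, not the real get_meminfo.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node figures live in sysfs; the trace checks for this file before using it.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}          # per-node files prefix every line with "Node <N> "
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # every other key just hits continue, as in the trace
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0                                  # requested key not present at all
}
get_meminfo_sketch AnonHugePages            # -> 0 on the system traced here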
00:11:48.347 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[trace condensed: the same get_meminfo scan now runs against AnonHugePages over the /proc/meminfo snapshot printed above; every key from Inactive(anon) through WritebackTmp fails the test at common.sh@32 and hits continue. The repeated read/test/continue entries are elided; the trace resumes just before the matching key.]
00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:48.348 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
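(At this point verify_nr_hugepages, setup/hugepages.sh@89 onward, has stored anon=0 and is fetching HugePages_Surp; HugePages_Rsvd follows at hugepages.sh@100 further down. Those values feed the per-node bookkeeping whose tail is visible at the end of the default_setup test above: nodes_test is adjusted per node and the observed count is compared with the expected one, printing lines like node0=1024 expecting 1024. The sketch below is a loose reconstruction of that flow inferred from the hugepages.sh line numbers in this trace, reusing get_meminfo_sketch from the earlier sketch; helper names are hypothetical and the real script almost certainly differs in detail.)

# Loose, hypothetical reconstruction of the verification bookkeeping; not the real setup/hugepages.sh.
verify_nr_hugepages_sketch() {
    local expected=$1                                    # e.g. 1024 in the default_setup case
    local anon surp resv node
    anon=$(get_meminfo_sketch AnonHugePages)             # hugepages.sh@97 in the trace
    surp=$(get_meminfo_sketch HugePages_Surp)            # hugepages.sh@99
    resv=$(get_meminfo_sketch HugePages_Rsvd)            # hugepages.sh@100
    declare -A nodes_test=( [0]=$expected )              # per-node observed totals (assumed shape)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[$node] += surp ))                  # mirrors (( nodes_test[node] += 0 )) at @117
        echo "node$node=${nodes_test[$node]} expecting $expected"
        [[ ${nodes_test[$node]} == "$expected" ]] || return 1   # the @130 comparison
    done
}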
00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41107284 kB' 'MemAvailable: 43041392 kB' 'Buffers: 3168 kB' 'Cached: 12829116 kB' 'SwapCached: 296 kB' 'Active: 10186972 kB' 'Inactive: 3283876 kB' 'Active(anon): 9733260 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641612 kB' 'Mapped: 225036 kB' 'Shmem: 10492216 kB' 'KReclaimable: 498368 kB' 'Slab: 1164332 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665964 kB' 'KernelStack: 22256 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12696360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218856 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.349 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.349 11:20:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
[trace condensed: get_meminfo repeats the scan once more, this time for HugePages_Surp, over the second /proc/meminfo snapshot printed above; every key from Buffers through FileHugePages fails the test at common.sh@32 and hits continue. The repeated read/test/continue entries are elided; the trace resumes at the last keys of the scan.]
00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.350 11:20:13
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:48.350 11:20:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:48.350 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41107284 kB' 'MemAvailable: 43041392 kB' 'Buffers: 3168 kB' 'Cached: 12829132 kB' 'SwapCached: 296 kB' 'Active: 10187036 kB' 'Inactive: 3283876 kB' 'Active(anon): 9733324 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641132 kB' 'Mapped: 225036 kB' 'Shmem: 10492232 kB' 'KReclaimable: 498368 kB' 'Slab: 1164332 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665964 kB' 'KernelStack: 22240 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12696380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218856 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.351 11:20:13 setup.sh.hugepages.per_node_1G_alloc 
[... xtrace elided: setup/common.sh@31-32 reads every key of the /proc/meminfo snapshot above, from MemAvailable through HugePages_Free, and skips each with continue until HugePages_Rsvd matches; common.sh@33 echoes 0 and returns 0, setup/hugepages.sh@100 records resv=0, and hugepages.sh@102-103 report the running totals ...]
00:11:48.353 nr_hugepages=1024
00:11:48.353 resv_hugepages=0
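The nr_hugepages / resv_hugepages lines above (and the surplus/anon echoes that follow) are the tail of the consistency check setup/hugepages.sh runs before looking at per-node allocations: the requested pool size (nr_hugepages=1024) has to line up with what the kernel reports once surplus and reserved pages are counted. A minimal sketch of that check, reusing the get_meminfo() sketch above; verify_hugepage_pool is a hypothetical name and the body is an approximation of the traced logic, not the literal script:

# Approximate shape of the verification traced at setup/hugepages.sh@99-110.
verify_hugepage_pool() {
    local nr_hugepages=1024
    local surp resv total

    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run

    # The pool is consistent when the kernel-reported total equals the
    # requested count plus surplus and reserved pages (1024 == 1024 + 0 + 0 here).
    (( total == nr_hugepages + surp + resv )) || return 1

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$(get_meminfo AnonHugePages)"
}

Once this passes, the trace moves on to get_nodes and checks how the 1024 pages are split across the two NUMA nodes, expecting 512 on each.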
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:48.353 surplus_hugepages=0 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:48.353 anon_hugepages=0 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41107944 kB' 'MemAvailable: 43042052 kB' 'Buffers: 3168 kB' 'Cached: 12829176 kB' 'SwapCached: 296 kB' 'Active: 10186664 kB' 'Inactive: 3283876 kB' 'Active(anon): 9732952 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641216 kB' 'Mapped: 225036 kB' 'Shmem: 10492276 kB' 'KReclaimable: 498368 kB' 'Slab: 1164332 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665964 kB' 'KernelStack: 22240 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12696404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218856 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.353 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.353 11:20:13 
[... xtrace elided: setup/common.sh@31-32 scans the snapshot above key by key, from MemFree through Unaccepted, until HugePages_Total matches; common.sh@33 echoes 1024 and returns 0; setup/hugepages.sh@110 re-checks (( 1024 == nr_hugepages + surp + resv )); hugepages.sh@112 then calls get_nodes, which walks /sys/devices/system/node/node+([0-9]) and records nodes_sys[${node##*node}]=512 for the first node ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22962104 kB' 'MemUsed: 9677036 kB' 'SwapCached: 284 kB' 'Active: 6291196 kB' 'Inactive: 1185808 kB' 'Active(anon): 5998068 kB' 'Inactive(anon): 1001252 kB' 'Active(file): 293128 kB' 'Inactive(file): 184556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7051776 kB' 'Mapped: 173960 kB' 'AnonPages: 428508 kB' 'Shmem: 6573808 kB' 'KernelStack: 12920 kB' 'PageTables: 6284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175500 kB' 'Slab: 480644 kB' 'SReclaimable: 175500 kB' 'SUnreclaim: 305144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:48.616 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.617 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.617 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.617 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.617 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.617 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 xtrace: node0 meminfo fields MemUsed through HugePages_Total each checked against HugePages_Surp and skipped with continue ...]
00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.618 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 18148376 kB' 'MemUsed: 9507680 kB' 'SwapCached: 12 kB' 'Active: 3895832 kB' 'Inactive: 2098068 kB' 'Active(anon): 3735248 kB' 'Inactive(anon): 396268 kB' 'Active(file): 160584 kB' 'Inactive(file): 1701800 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5780868 kB' 'Mapped: 51076 kB' 'AnonPages: 213100 kB' 'Shmem: 3918472 kB' 'KernelStack: 9336 kB' 'PageTables: 2804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322868 kB' 'Slab: 683688 kB' 'SReclaimable: 322868 kB' 'SUnreclaim: 360820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
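The trace just above is setup/common.sh's get_meminfo helper resolving HugePages_Surp for NUMA node 1: it switches from /proc/meminfo to /sys/devices/system/node/node1/meminfo when that file exists, strips the "Node <n> " prefix those per-node files put on every line, and then walks the fields until the requested key matches. A minimal stand-alone sketch of that lookup, assuming nothing beyond the two meminfo file formats visible in the trace (the function name and comments are illustrative, not the SPDK helper itself):

    # Sketch only: return one meminfo value, system-wide or for a given NUMA node.
    get_meminfo_sketch() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo   # lines look like "Node 1 MemFree: ... kB"
        fi
        local val
        val=$(awk -v k="$key:" '{ sub(/^Node [0-9]+ /, ""); if ($1 == k) { print $2; exit } }' "$file")
        echo "${val:-0}"   # kB for most fields, a bare page count for HugePages_*
    }
    # e.g. get_meminfo_sketch HugePages_Surp 1   -> surplus default-size huge pages on node 1

Here it would print 0, matching the echo 0 seen in the trace, since the per-node dump above reports 'HugePages_Surp: 0'.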
[... setup/common.sh@31-32 xtrace: node1 meminfo fields MemTotal through FilePmdMapped each checked against HugePages_Surp and skipped with continue ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:48.619 node0=512 expecting 512 00:11:48.619 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:48.620 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:48.620 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:48.620 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:11:48.620 node1=512 expecting 512 00:11:48.620 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:48.620 00:11:48.620 real 0m4.360s 00:11:48.620 user 0m1.603s 00:11:48.620 sys 0m2.825s 00:11:48.620 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:48.620 11:20:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:48.620 ************************************ 00:11:48.620 END TEST per_node_1G_alloc 00:11:48.620 ************************************ 00:11:48.620 11:20:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:11:48.620 11:20:13 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:48.620 11:20:13 
setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:48.620 11:20:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:48.620 ************************************ 00:11:48.620 START TEST even_2G_alloc 00:11:48.620 ************************************ 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:48.620 11:20:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:11:52.811 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
00:11:52.811 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:11:52.811 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:11:52.811 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:11:52.811 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:52.811 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:52.811 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:52.811 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:52.811 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:52.811 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41090592 kB' 'MemAvailable: 43024700 kB' 'Buffers: 3168 kB' 'Cached: 12829288 kB' 'SwapCached: 296 kB' 'Active: 10187796 kB' 'Inactive: 3283876 kB' 'Active(anon): 9734084 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642144 kB' 'Mapped: 225108 kB' 'Shmem: 10492388 kB' 'KReclaimable: 498368 kB' 'Slab: 1164212 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665844 kB' 'KernelStack: 22272 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12697280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218952 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.812 11:20:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue
[... setup/common.sh@31-32 xtrace: /proc/meminfo fields Inactive through WritebackTmp each checked against AnonHugePages and skipped with continue ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
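By this point in even_2G_alloc, verify_nr_hugepages has recorded anon=0 (no anonymous huge pages counted toward the total), and the allocation set up earlier in the test asked for 1024 pages of the 2048 kB default size split evenly across the two nodes (NRHUGE=1024, HUGE_EVEN_ALLOC=yes, 512 per node). The remaining trace re-reads HugePages_Surp and the per-node totals to confirm that split. A rough equivalent check against the per-node sysfs counters, offered only as an illustrative sketch and not the hugepages.sh logic itself (NRHUGE comes from the trace; nodes, expected and got are made-up names):

    # Sketch only: confirm an even split of default-size (2048 kB) huge pages across NUMA nodes.
    NRHUGE=${NRHUGE:-1024}
    nodes=(/sys/devices/system/node/node[0-9]*)       # assumes sysfs NUMA node directories exist
    expected=$(( NRHUGE / ${#nodes[@]} ))
    for n in "${nodes[@]}"; do
        got=$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${n##*/}=$got expecting $expected"       # same shape as the test's own output
        [[ $got -eq $expected ]] || exit 1
    done

For this run that should come out as node0=512 expecting 512 and node1=512 expecting 512, matching the per-node HugePages_Total values shown in the dumps above.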
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:52.813 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41091708 kB' 'MemAvailable: 43025816 kB' 'Buffers: 3168 kB' 'Cached: 12829292 kB' 'SwapCached: 296 kB' 'Active: 10187924 kB' 'Inactive: 3283876 kB' 'Active(anon): 9734212 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642324 kB' 'Mapped: 225044 kB' 'Shmem: 10492392 kB' 'KReclaimable: 498368 kB' 'Slab: 1164180 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665812 kB' 'KernelStack: 22272 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12697300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[setup/common.sh@31-32 then scans each field printed above with IFS=': ', read -r var val _, [[ <field> == HugePages_Surp ]], continue, until HugePages_Surp matches at setup/common.sh@32]
00:11:53.079 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:11:53.079 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:11:53.079 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:11:53.079 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[the same get_meminfo setup entries repeat for this lookup (setup/common.sh@17-31: locals, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ', read -r var val _) before the snapshot below]
00:11:53.079 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41091000 kB' 'MemAvailable: 43025108 kB' 'Buffers: 3168 kB' 'Cached: 12829308 kB' 'SwapCached: 296 kB' 'Active: 10187784 kB' 'Inactive: 3283876 kB' 'Active(anon): 9734072 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642144 kB' 'Mapped: 225044 kB' 'Shmem: 10492408 kB' 'KReclaimable: 498368 kB' 'Slab: 1164180 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665812 kB' 'KernelStack: 22256 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12697320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
[setup/common.sh@31-32 scans the fields again, this time until HugePages_Rsvd matches]
00:11:53.081 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:11:53.081 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:11:53.081 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:11:53.081 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:53.081 nr_hugepages=1024
11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:53.081 resv_hugepages=0
11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:53.081 surplus_hugepages=0
11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:53.081 anon_hugepages=0
11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:11:53.081 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:11:53.081 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[the same get_meminfo setup entries repeat once more (setup/common.sh@17-29) before the HugePages_Total snapshot and scan that continue below]
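What the trace above boils down to: setup/common.sh's get_meminfo walks /proc/meminfo (or a per-node meminfo file) field by field and prints the value of the requested field, and setup/hugepages.sh then cross-checks the configured hugepage count against the kernel's counters. The HugePages_Total lookup has just started above and its scan continues below; before it does, here is a minimal bash sketch of that flow, reconstructed from the xtrace rather than copied from the SPDK source, so names and details are approximate:

  shopt -s extglob   # the "+([0-9])" pattern below needs extended globbing

  get_meminfo() {    # sketch of the helper traced as setup/common.sh get_meminfo
      local get=$1 node=${2:-}
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # with a node argument, read the per-node file instead
      if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "; strip it
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # this is the continue branch that fills the log
          echo "$val" && return 0
      done
  }

  # Accounting step like the one traced at setup/hugepages.sh@99-@107.
  nr_hugepages=1024                     # echoed by the script at setup/hugepages.sh@102 in this run
  surp=$(get_meminfo HugePages_Surp)    # 0 in the run above
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in the run above
  # The left-hand 1024 in the @107 trace is the script's expected page count; the
  # xtrace only shows it already expanded, so the literal here is an assumption.
  (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"

The per-field scan is why the log repeats the IFS=': ' / read -r var val _ / continue triplet dozens of times per lookup: every field before the requested one takes the continue branch.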
00:11:53.081 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41089828 kB' 'MemAvailable: 43023936 kB' 'Buffers: 3168 kB' 'Cached: 12829332 kB' 'SwapCached: 296 kB' 'Active: 10187996 kB' 'Inactive: 3283876 kB' 'Active(anon): 9734284 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642332 kB' 'Mapped: 225044 kB' 'Shmem: 10492432 kB' 'KReclaimable: 498368 kB' 'Slab: 1164180 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665812 kB' 'KernelStack: 22272 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12697344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 
11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.082 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:53.083 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _
00:11:53.084 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:11:53.084 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:11:53.084 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:11:53.084 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:11:53.084 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:11:53.084 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:11:53.084 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:11:53.084 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:11:53.084 11:20:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:11:53.084 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:53.085 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:53.085 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:53.085 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:53.085 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22959920 kB' 'MemUsed: 9679220 kB' 'SwapCached: 284 kB' 'Active: 6292044 kB' 'Inactive: 1185808 kB' 'Active(anon): 5998916 kB' 'Inactive(anon): 1001252 kB' 'Active(file): 293128 kB' 'Inactive(file): 184556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7051940 kB' 'Mapped: 173976 kB' 'AnonPages: 429140 kB' 'Shmem: 6573972 kB' 'KernelStack: 12920 kB' 'PageTables: 6192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175500 kB' 'Slab: 480752 kB' 'SReclaimable: 175500 kB' 'SUnreclaim: 305252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
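The trace above shows the lookup helper in setup/common.sh resolving HugePages_Total from /proc/meminfo and then switching to /sys/devices/system/node/node0/meminfo for the per-node surplus query. Below is a minimal standalone sketch of that lookup pattern, reconstructed from the traced commands; the name get_meminfo_sketch and the return-on-miss behaviour are illustrative assumptions, not the SPDK helper itself.
shopt -s extglob                                   # the "Node N " strip below uses an extglob pattern
get_meminfo_sketch() {                             # illustrative name, not the SPDK function
  local get=$1 node=$2
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")                 # per-node files prefix every line with "Node N "
  local line var val _
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1                                         # assumed behaviour when the field is absent
}
get_meminfo_sketch HugePages_Total                 # prints 1024 on this box, matching the trace above
get_meminfo_sketch HugePages_Surp 0                # prints 0 for node0, as echoed in the trace below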
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:53.086 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:53.087 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:11:53.087 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:11:53.087 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:53.087 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:53.087 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 18131484 kB' 'MemUsed: 9524572 kB' 'SwapCached: 12 kB' 'Active: 3896020 kB' 'Inactive: 2098068 kB' 'Active(anon): 3735436 kB' 'Inactive(anon): 396268 kB' 'Active(file): 160584 kB' 'Inactive(file): 1701800 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5780876 kB' 'Mapped: 51076 kB' 'AnonPages: 213288 kB' 'Shmem: 3918480 kB' 'KernelStack: 9336 kB' 'PageTables: 2856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322868 kB' 'Slab: 683432 kB' 'SReclaimable: 322868 kB' 'SUnreclaim: 360564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
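With node0's surplus back as 0, the same query is repeated for node1 above. The loop traced at setup/hugepages.sh@115-@128 amounts to the per-node accounting sketched below; the variable names and exact bookkeeping are assumptions drawn from the trace (using the get_meminfo_sketch helper from earlier), not the verbatim hugepages.sh code.
resv=0                                             # no reserved pages in this run, per the trace
nodes_expected=(512 512)                           # the even 2G split the test set up earlier
for node in "${!nodes_expected[@]}"; do
  (( nodes_expected[node] += resv ))               # reserved pages count toward the expectation
  surp=$(get_meminfo_sketch HugePages_Surp "$node")
  (( nodes_expected[node] += surp ))               # surplus pages do too (0 here for both nodes)
  total=$(get_meminfo_sketch HugePages_Total "$node")
  echo "node$node=$total expecting ${nodes_expected[node]}"   # same shape as the "node0=512 expecting 512" lines below
done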
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:11:53.088 node0=512 expecting 512
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:11:53.088 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:11:53.089 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:11:53.089 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:11:53.089 node1=512 expecting 512
00:11:53.089 11:20:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:11:53.089
00:11:53.089 real 0m4.456s
00:11:53.089 user 0m1.668s
00:11:53.089 sys 0m2.870s
00:11:53.089 11:20:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:11:53.089 11:20:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:11:53.089 ************************************
00:11:53.089 END TEST even_2G_alloc
00:11:53.089 ************************************
00:11:53.089 11:20:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:11:53.089 11:20:18 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:11:53.089 11:20:18 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:11:53.089 11:20:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:11:53.089 ************************************
00:11:53.089 START TEST odd_alloc
00:11:53.089 ************************************
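even_2G_alloc passes here: 2G of huge pages at the default 2048 kB page size is 1024 pages, and an even split over the two NUMA nodes is the 512-per-node figure echoed above. A quick sanity check of that arithmetic, using values taken from the trace:
hugemem_kb=$((2 * 1024 * 1024))        # the 2G requested by the test
hugepagesize_kb=2048                   # Hugepagesize reported in /proc/meminfo
nr_hugepages=$((hugemem_kb / hugepagesize_kb))
echo "$nr_hugepages total, $((nr_hugepages / 2)) per node"   # 1024 total, 512 per node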
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:11:53.089 11:20:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:11:57.281 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:11:57.281 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:57.281 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:11:57.282 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41095896 kB' 'MemAvailable: 43030004 kB' 'Buffers: 3168 kB' 'Cached: 12829464 kB' 'SwapCached: 296 kB' 'Active: 10188500 kB' 'Inactive: 3283876 kB' 'Active(anon): 9734788 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642880 kB' 'Mapped: 225068 kB' 'Shmem: 10492564 kB' 'KReclaimable: 498368 kB' 'Slab: 1164024 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665656 kB' 'KernelStack: 22256 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12698220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218856 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB'
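The odd_alloc setup traced above asks for 2098176 kB, i.e. 1025 pages of 2048 kB, and the nodes_test assignments at setup/hugepages.sh@81-@84 leave node0 with 513 pages and node1 with 512; the /proc/meminfo snapshot already reports HugePages_Total: 1025. The sketch below only reproduces that resulting split under those assumptions and is not the verbatim hugepages.sh loop.
split_pages() {                        # hypothetical helper, for illustration only
  local total=$1 nodes=$2 n
  local -a share
  for (( n = nodes - 1; n >= 0; n-- )); do
    share[n]=$(( total / (n + 1) ))    # this node's portion of what is still unassigned
    (( total -= share[n] ))            # the remainder rolls onto lower-numbered nodes
  done
  echo "${share[@]}"
}
split_pages 1025 2                     # prints "513 512" (node0 node1), matching the trace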
00:11:57.283 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:57.283 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:11:57.283 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:11:57.283 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:11:57.283 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60295196 kB' 'MemFree: 41096524 kB' 'MemAvailable: 43030632 kB' 'Buffers: 3168 kB' 'Cached: 12829468 kB' 'SwapCached: 296 kB' 'Active: 10188944 kB' 'Inactive: 3283876 kB' 'Active(anon): 9735232 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643384 kB' 'Mapped: 225068 kB' 'Shmem: 10492568 kB' 'KReclaimable: 498368 kB' 'Slab: 1164120 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665752 kB' 'KernelStack: 22272 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12698240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218840 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.284 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.285 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.548 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.549 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41095768 kB' 'MemAvailable: 43029876 kB' 'Buffers: 3168 kB' 'Cached: 12829480 kB' 'SwapCached: 296 kB' 'Active: 10188964 kB' 'Inactive: 3283876 kB' 'Active(anon): 9735252 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643372 kB' 'Mapped: 225068 kB' 'Shmem: 10492580 kB' 'KReclaimable: 498368 kB' 'Slab: 1164120 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665752 kB' 'KernelStack: 22256 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12698260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218840 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 
11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.550 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.551 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:11:57.552 nr_hugepages=1025 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:57.552 resv_hugepages=0 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:57.552 surplus_hugepages=0 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:57.552 anon_hugepages=0 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41095840 kB' 'MemAvailable: 43029948 kB' 'Buffers: 3168 kB' 'Cached: 12829504 kB' 'SwapCached: 296 kB' 'Active: 10188976 kB' 'Inactive: 3283876 kB' 'Active(anon): 9735264 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643388 kB' 'Mapped: 225068 kB' 'Shmem: 10492604 kB' 'KReclaimable: 498368 kB' 'Slab: 1164120 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665752 kB' 'KernelStack: 22272 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 12698280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218840 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.552 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.553 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:57.554 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22961952 kB' 'MemUsed: 9677188 kB' 'SwapCached: 284 kB' 'Active: 6293188 kB' 'Inactive: 1185808 kB' 'Active(anon): 6000060 kB' 'Inactive(anon): 1001252 kB' 'Active(file): 293128 kB' 'Inactive(file): 184556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7052116 kB' 'Mapped: 173992 kB' 'AnonPages: 430208 kB' 'Shmem: 6574148 kB' 'KernelStack: 12920 kB' 'PageTables: 6192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175500 kB' 'Slab: 480596 kB' 'SReclaimable: 175500 kB' 'SUnreclaim: 305096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:57.555 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.556 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 18134268 kB' 'MemUsed: 9521788 kB' 'SwapCached: 12 kB' 'Active: 3895800 kB' 'Inactive: 2098068 kB' 'Active(anon): 3735216 kB' 'Inactive(anon): 396268 kB' 'Active(file): 160584 kB' 'Inactive(file): 1701800 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5780872 kB' 'Mapped: 51076 kB' 'AnonPages: 213172 kB' 'Shmem: 3918476 kB' 'KernelStack: 9352 kB' 'PageTables: 2904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322868 kB' 'Slab: 683524 kB' 'SReclaimable: 322868 kB' 'SUnreclaim: 360656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.557 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
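The trace around this point is setup/common.sh's get_meminfo walking node 1's meminfo file key by key: it selects /proc/meminfo or /sys/devices/system/node/nodeN/meminfo, strips the "Node N " prefix, and reads each "key: value" pair until the requested key (here HugePages_Surp) matches, then echoes the value. A minimal sketch of that parsing pattern follows; the function name get_meminfo_sketch is illustrative only and is not part of the SPDK tree.

get_meminfo_sketch() {
    # Pick the system-wide meminfo or, when a node index is given, that node's file.
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    # Per-node lines look like "Node 1 HugePages_Surp:     0"; drop the prefix,
    # then scan "key: value" pairs until the requested key is found.
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 1   -> prints 0 on the box traced here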
00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.558 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:11:57.559 node0=512 expecting 513 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:11:57.559 node1=513 expecting 512 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:11:57.559 00:11:57.559 real 0m4.372s 00:11:57.559 user 0m1.621s 00:11:57.559 sys 0m2.829s 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:57.559 11:20:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:57.559 ************************************ 00:11:57.559 END TEST odd_alloc 00:11:57.559 ************************************ 00:11:57.559 11:20:22 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:11:57.559 11:20:22 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:11:57.559 11:20:22 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:57.559 11:20:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:57.559 ************************************ 00:11:57.559 START TEST custom_alloc 00:11:57.559 ************************************ 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
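The odd_alloc test that finishes just above ("node0=512 expecting 513" / "node1=513 expecting 512") requests 1025 huge pages and verifies that the kernel split them across the two NUMA nodes as 512/513 in either order, by reading HugePages_Total from each node's meminfo. A short sketch of that per-node check is below; the expected counts are hard-coded here purely for illustration and the loop is not a reproduction of setup/hugepages.sh.

expected=(512 513)   # illustrative odd split for nr_hugepages=1025
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo carries a line of the form "Node N HugePages_Total:   <count>"
    total=$(awk '/HugePages_Total/ {print $4}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected[$node]}"
done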
00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:57.559 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:57.560 11:20:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:12:01.770 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:12:01.770 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:12:01.770 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40027796 kB' 'MemAvailable: 41961904 kB' 'Buffers: 3168 kB' 'Cached: 12829624 kB' 'SwapCached: 296 kB' 'Active: 10191400 kB' 'Inactive: 3283876 kB' 'Active(anon): 9737688 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645496 kB' 'Mapped: 225584 kB' 'Shmem: 10492724 kB' 'KReclaimable: 498368 kB' 'Slab: 1163528 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665160 kB' 'KernelStack: 22256 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12701368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219032 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.770 11:20:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.770 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40021036 kB' 'MemAvailable: 41955144 kB' 'Buffers: 3168 kB' 'Cached: 12829628 kB' 'SwapCached: 296 kB' 'Active: 10195300 kB' 'Inactive: 3283876 kB' 'Active(anon): 9741588 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649428 kB' 'Mapped: 225580 kB' 'Shmem: 10492728 kB' 'KReclaimable: 498368 kB' 'Slab: 1163528 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665160 kB' 'KernelStack: 22288 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12704964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218988 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.771 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 
11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.772 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40017944 kB' 'MemAvailable: 41952052 kB' 'Buffers: 3168 kB' 'Cached: 12829632 kB' 'SwapCached: 296 kB' 'Active: 10193552 kB' 'Inactive: 3283876 kB' 'Active(anon): 9739840 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647700 kB' 'Mapped: 225580 kB' 'Shmem: 
10492732 kB' 'KReclaimable: 498368 kB' 'Slab: 1163520 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665152 kB' 'KernelStack: 22288 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12703388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219000 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.773 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.774 
11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:01.774 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.036 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
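The block above (which keeps going below) is the trace of the test's get_meminfo helper in setup/common.sh: the meminfo contents are walked entry by entry with read -r var val _ under IFS=': ', skipping every key with continue until the requested one (HugePages_Rsvd here) is reached and its value is echoed. A minimal stand-alone sketch of that lookup pattern, assuming a direct read of /proc/meminfo and an illustrative function name rather than the exact SPDK helper:

  #!/usr/bin/env bash
  # Sketch of the key lookup traced above; meminfo_get is an illustrative name.
  meminfo_get() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Skip every key until the requested one; the "kB" unit (when present)
          # lands in the discarded third field, so only the number is printed.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  meminfo_get HugePages_Rsvd   # prints 0 on this node, matching the "echo 0" below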
00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.037 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:12:02.038 nr_hugepages=1536 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:02.038 resv_hugepages=0 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:02.038 surplus_hugepages=0 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:02.038 anon_hugepages=0 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 40014428 kB' 'MemAvailable: 41948536 kB' 'Buffers: 3168 kB' 'Cached: 12829676 kB' 'SwapCached: 296 kB' 'Active: 10189992 kB' 'Inactive: 3283876 kB' 'Active(anon): 9736280 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644128 kB' 'Mapped: 225356 kB' 'Shmem: 10492776 kB' 'KReclaimable: 498368 kB' 'Slab: 1163572 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665204 kB' 'KernelStack: 22272 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 12699252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219000 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.038 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.039 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22928864 kB' 'MemUsed: 9710276 kB' 'SwapCached: 284 kB' 'Active: 6293908 kB' 'Inactive: 1185808 kB' 'Active(anon): 6000780 kB' 'Inactive(anon): 1001252 kB' 'Active(file): 293128 kB' 'Inactive(file): 184556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7052292 kB' 'Mapped: 174000 kB' 'AnonPages: 430676 kB' 'Shmem: 6574324 kB' 'KernelStack: 12888 kB' 'PageTables: 6036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175500 kB' 'Slab: 480392 kB' 'SReclaimable: 175500 kB' 'SUnreclaim: 304892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.040 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
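The scan running here is the same helper again, but with a node argument (node=0): as the trace a little above shows, it switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node <N> " prefix from every line (the ${mem[@]#Node +([0-9]) } expansion is an extglob pattern), so the same key/value parsing works for both files. A sketch of that per-node variant, with node_meminfo_get as an illustrative name rather than the exact SPDK helper:

  # Sketch of the per-node lookup traced here (illustrative name and structure).
  node_meminfo_get() {
      local get=$1 node=$2 mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#"Node $node "}   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < "$mem_f"
      return 1
  }

  node_meminfo_get HugePages_Surp 0   # prints 0 here, matching the "echo 0" below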
00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
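At this point the accounting in the log adds up: the system-wide HugePages_Total read above is 1536 with resv=0 and surplus=0, and get_nodes recorded 512 pages on node 0 and 1024 on node 1 (the nodes_sys assignments further up), which is exactly the 1536 the custom_alloc test asked for. A small sketch that re-derives the same sum from the per-node meminfo files shown in the trace (paths as in the log; the awk one-liner is only an illustration, not part of setup.sh):

  # Re-derive the per-node split seen in the log: 512 (node0) + 1024 (node1) = 1536.
  total=0
  for f in /sys/devices/system/node/node[0-9]*/meminfo; do
      n=$(awk '/HugePages_Total/ {print $NF}' "$f")
      echo "$f: $n"
      total=$(( total + n ))
  done
  echo "sum=$total"   # expected to equal HugePages_Total in /proc/meminfo (1536 here)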
00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656056 kB' 'MemFree: 17086068 kB' 'MemUsed: 10569988 kB' 'SwapCached: 12 kB' 'Active: 3895956 kB' 'Inactive: 2098068 kB' 'Active(anon): 3735372 kB' 'Inactive(anon): 396268 kB' 'Active(file): 160584 kB' 'Inactive(file): 1701800 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5780884 kB' 'Mapped: 51076 kB' 'AnonPages: 213288 kB' 'Shmem: 3918488 kB' 'KernelStack: 9368 kB' 'PageTables: 2900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 322868 kB' 'Slab: 683180 kB' 'SReclaimable: 322868 kB' 'SUnreclaim: 360312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.041 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
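Once both nodes have reported HugePages_Surp (node 1's "echo 0" follows just below), the script compares the per-node distribution it computed (nodes_test) against what sysfs reported (nodes_sys); the sorted_t[nodes_test[node]]=1 / sorted_s[nodes_sys[node]]=1 assignments visible a little further down use the counts themselves as array indices, so the two layouts can be compared independently of node order. A minimal sketch of that idiom with the values from this log (512 and 1024); how setup.sh finishes the comparison is not shown in this excerpt, so the final test line here is only illustrative:

  # Sketch of the index-as-set idiom used by sorted_t/sorted_s (values from the log).
  declare -a nodes_test=(512 1024)   # per-node counts the test expects
  declare -a nodes_sys=(512 1024)    # per-node counts reported by the kernel
  declare -a sorted_t=() sorted_s=()
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1   # the count becomes the index, i.e. a set member
      sorted_s[nodes_sys[node]]=1
  done
  # Indexed-array keys expand in ascending order, so equal sets compare equal.
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node hugepage layout matches"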
00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.042 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:12:02.043 node0=512 expecting 512 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:12:02.043 node1=1024 expecting 1024 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:12:02.043 00:12:02.043 real 0m4.395s 00:12:02.043 user 0m1.669s 00:12:02.043 sys 0m2.806s 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:02.043 11:20:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:02.043 ************************************ 00:12:02.043 END TEST custom_alloc 00:12:02.043 ************************************ 00:12:02.043 11:20:27 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:12:02.043 11:20:27 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:02.043 11:20:27 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:02.043 11:20:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:02.043 ************************************ 00:12:02.043 START TEST no_shrink_alloc 00:12:02.043 ************************************ 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- 
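The trace above closes the custom_alloc case: node0 and node1 both report the expected 512 and 1024 pages, the "512,1024" comparison passes, and the test finishes in roughly 4.4 seconds before no_shrink_alloc starts with get_test_nr_hugepages 2097152 0. Judging only from the values visible in the trace (nr_hugepages=1024, and a Hugepagesize of 2048 kB in the meminfo dumps further down), the target page count appears to be the requested size divided by the hugepage size, and the per-node loop that continues below gives the whole target to node 0. A minimal bash sketch of that arithmetic, with names borrowed from setup/hugepages.sh purely for illustration:

    # Sketch only: reproduces the values visible in the xtrace, not the script itself.
    size=2097152                                # first argument to get_test_nr_hugepages (units not shown here)
    hugepage_kb=2048                            # Hugepagesize reported by /proc/meminfo in this run
    nr_hugepages=$(( size / hugepage_kb ))      # 1024, matching nr_hugepages=1024 above
    user_nodes=(0)                              # second argument: only node 0 is targeted
    declare -a nodes_test=()
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages          # node 0 ends up expecting all 1024 pages
    done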
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:02.043 11:20:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:12:06.241 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:12:06.241 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41072888 kB' 'MemAvailable: 43006996 kB' 'Buffers: 3168 kB' 'Cached: 12829796 kB' 'SwapCached: 296 kB' 'Active: 10192736 kB' 'Inactive: 3283876 kB' 'Active(anon): 9739024 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645488 kB' 'Mapped: 225196 kB' 'Shmem: 10492896 kB' 'KReclaimable: 498368 kB' 'Slab: 1164500 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 666132 kB' 'KernelStack: 22368 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12702732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219208 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.241 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- 
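The repetitive run through here is the get_meminfo helper from setup/common.sh resolving a single field (AnonHugePages at this point in the trace). With xtrace on, every non-matching /proc/meminfo key emits the same four commands, a [[ ... ]] comparison, a continue, an IFS=': ' assignment and a read, which is why the key names simply march through the file in order. The overall shape, as far as it can be read from the trace, is roughly the following; the function name and details are a simplified reconstruction, not the literal SPDK implementation:

    # get_meminfo_sketch KEY [NODE] -- hypothetical, simplified version of the lookup
    # pattern visible in the xtrace; prints the value of KEY, or 0 if it is missing.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # with a node argument the per-node meminfo file is read instead
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#"Node $node "}            # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"                  # the "echo 0" seen once the scan finds its key
                return 0
            fi
        done < "$mem_f"
        echo 0
    }

Calling it as anon=$(get_meminfo_sketch AnonHugePages) would mirror the anon=0 assignment that hugepages.sh makes a little further down, once this scan reaches AnonHugePages.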
setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.242 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41073404 kB' 'MemAvailable: 43007512 kB' 'Buffers: 3168 kB' 'Cached: 12829796 kB' 'SwapCached: 296 kB' 'Active: 10193072 kB' 'Inactive: 3283876 kB' 'Active(anon): 9739360 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646340 kB' 'Mapped: 225196 kB' 'Shmem: 10492896 kB' 'KReclaimable: 498368 kB' 'Slab: 1164500 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 666132 kB' 'KernelStack: 22448 kB' 'PageTables: 9424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12702748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219160 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 
11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.243 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- 
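The meminfo snapshot printed just before this scan (the long quoted printf a few lines up) already contains the numbers that matter for no_shrink_alloc: HugePages_Total and HugePages_Free are both 1024 with a 2048 kB Hugepagesize (Hugetlb: 2097152 kB), consistent with the 1024-page target set up for node 0, and HugePages_Surp is 0, so this pass will end the same way the AnonHugePages lookup did. In terms of the sketch above, the assignments this scan is heading toward would look roughly like:

    # Hypothetical equivalents of the surp/resv assignments the trace reaches below,
    # using the sketch function from the earlier note; values match this run's snapshot.
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # looked up next, once surp is recorded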
setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.244 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41074244 kB' 'MemAvailable: 43008352 kB' 'Buffers: 3168 kB' 'Cached: 12829816 kB' 'SwapCached: 296 kB' 'Active: 10192108 kB' 'Inactive: 3283876 kB' 'Active(anon): 9738396 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645820 kB' 'Mapped: 225120 kB' 'Shmem: 10492916 kB' 'KReclaimable: 498368 kB' 'Slab: 1164484 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 666116 kB' 'KernelStack: 22384 kB' 'PageTables: 9540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12702772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219128 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 
kB' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:12:06.245 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 
11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.246 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:06.247 11:20:31 
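The loop traced above is setup/common.sh's get_meminfo scanning every meminfo field with IFS=': ' until it reaches the requested key: the HugePages_Surp read earlier returned 0 (surp=0), and the HugePages_Rsvd scan has just matched its key. A minimal stand-alone sketch of that lookup pattern, under the assumption of a hypothetical get_meminfo_sketch name standing in for the real setup/common.sh function:

#!/usr/bin/env bash
# Minimal sketch (assumption: simplified stand-in, not the verbatim SPDK
# get_meminfo): scan a meminfo file for one key and print its numeric value.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _

    # When a NUMA node is given, read that node's meminfo instead,
    # as the per-node branch in the trace does.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        # Per-node entries carry a "Node N " prefix; strip it so the key
        # matches the system-wide /proc/meminfo spelling.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
            return 0
        fi
    done <"$mem_f"
    return 1
}

# The two system-wide reads that produced surp=0 and resv=0 in this trace:
get_meminfo_sketch HugePages_Surp
get_meminfo_sketch HugePages_Rsvd

The traced helper instead mapfiles the whole file and strips the "Node N " prefix from the array in a single expansion ("${mem[@]#Node +([0-9]) }"); the line-by-line loop here trades that for brevity.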
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:06.247 nr_hugepages=1024 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:06.247 resv_hugepages=0 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:06.247 surplus_hugepages=0 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:06.247 anon_hugepages=0 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41073088 kB' 'MemAvailable: 43007196 kB' 'Buffers: 3168 kB' 'Cached: 12829836 kB' 'SwapCached: 296 kB' 'Active: 10192336 kB' 'Inactive: 3283876 kB' 'Active(anon): 9738624 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645996 kB' 'Mapped: 225120 kB' 'Shmem: 10492936 kB' 'KReclaimable: 498368 kB' 'Slab: 1164484 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 666116 kB' 'KernelStack: 22480 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12702792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219192 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.247 11:20:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.247 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 
11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.248 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21876452 kB' 'MemUsed: 10762688 kB' 'SwapCached: 284 kB' 'Active: 6293924 kB' 'Inactive: 1185808 kB' 'Active(anon): 6000796 kB' 'Inactive(anon): 1001252 kB' 'Active(file): 293128 kB' 'Inactive(file): 184556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7052416 kB' 'Mapped: 174012 kB' 'AnonPages: 430448 kB' 'Shmem: 6574448 kB' 'KernelStack: 12968 kB' 'PageTables: 6280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175500 kB' 'Slab: 481472 kB' 'SReclaimable: 175500 kB' 'SUnreclaim: 305972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.249 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:06.250 node0=1024 expecting 1024 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:12:06.250 11:20:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:12:10.474 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:12:10.474 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:12:10.474 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41065324 kB' 'MemAvailable: 42999432 kB' 'Buffers: 3168 kB' 'Cached: 12829960 kB' 'SwapCached: 296 kB' 'Active: 10192744 kB' 'Inactive: 3283876 kB' 'Active(anon): 9739032 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646564 kB' 'Mapped: 225156 kB' 'Shmem: 10493060 kB' 'KReclaimable: 498368 kB' 'Slab: 1164128 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665760 kB' 'KernelStack: 22288 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12702748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.474 11:20:35 
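The loop being traced here is the meminfo lookup in setup/common.sh: it snapshots /proc/meminfo (or the per-node copy under /sys/devices/system/node when a node is given), strips any "Node <n> " prefix, then walks the fields until the requested key is found and echoes its value. A minimal sketch of that lookup follows; the name get_meminfo_sketch and the simplified prefix handling are illustrative assumptions, not the exact helper from setup/common.sh.

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the per-node file when it exists, as the
    # "[[ -e /sys/devices/system/node/node/meminfo ]]" test in the trace shows.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node [0-9]* }        # drop the "Node <n> " prefix on per-node lines
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then    # every non-matching field is skipped ("continue" in the trace)
            echo "$val"                  # e.g. AnonHugePages -> 0, HugePages_Total -> 1024
            return 0
        fi
    done < "$mem_f"
    return 1
}

Against the meminfo dump captured above, get_meminfo_sketch AnonHugePages would be expected to print 0 and get_meminfo_sketch HugePages_Total 1024, matching the "echo 0" / "return 0" steps the trace records for each query.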
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.474 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.475 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41066324 kB' 'MemAvailable: 43000432 kB' 'Buffers: 3168 kB' 'Cached: 12829964 kB' 'SwapCached: 296 kB' 'Active: 10192892 kB' 'Inactive: 3283876 kB' 'Active(anon): 9739180 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646572 kB' 'Mapped: 225124 kB' 'Shmem: 10493064 kB' 'KReclaimable: 498368 kB' 'Slab: 1164144 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665776 kB' 'KernelStack: 22272 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12703788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218904 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 
11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.476 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.477 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41067380 kB' 'MemAvailable: 43001488 kB' 'Buffers: 3168 kB' 'Cached: 12829980 kB' 'SwapCached: 296 kB' 'Active: 10192404 kB' 'Inactive: 3283876 kB' 'Active(anon): 9738692 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646152 kB' 'Mapped: 225124 kB' 'Shmem: 10493080 kB' 'KReclaimable: 498368 kB' 'Slab: 1164144 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665776 kB' 'KernelStack: 22320 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12702412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 
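Once AnonHugePages, HugePages_Surp and HugePages_Rsvd have been read back (all 0 in this run), the earlier "node0=1024 expecting 1024" line shows the shape of the final check in setup/hugepages.sh: per-node counts are collected into nodes_test and compared against the expected allocation. A condensed sketch of that comparison, with the surrounding bookkeeping simplified and the expected value hard-coded for illustration:

# Values taken from the trace: 1024 hugepages on node 0, nothing surplus or reserved.
declare -A nodes_test=( [0]=1024 )
anon=0 surp=0 resv=0
expected=1024

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += 0 ))    # the surp/resv adjustments are zero in this run
    echo "node${node}=${nodes_test[node]} expecting ${expected}"
    [[ ${nodes_test[node]} == "$expected" ]] || echo "hugepage count mismatch on node $node"
done

The INFO line earlier in the run notes that the requested 512 pages were not re-allocated because 1024 were already present, so the counts read back here still reflect the original 1024-page allocation.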
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.478 11:20:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.479 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 
11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:12:10.480 nr_hugepages=1024 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:12:10.480 resv_hugepages=0 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:12:10.480 surplus_hugepages=0 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:12:10.480 anon_hugepages=0 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- 
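The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" records above come from the get_meminfo helper in setup/common.sh: it prints the whole meminfo file, then walks it one "key: value" pair at a time under IFS=': ' until the requested field matches, echoes that value, and returns; the reserved-hugepage count it returned here (0) feeds the resv/nr_hugepages/surplus accounting that follows. A minimal stand-alone sketch of that lookup, assuming a simplified reconstruction rather than the verbatim SPDK helper (the function name is hypothetical):

    # Simplified sketch of the meminfo lookup traced above (not the exact
    # SPDK setup/common.sh code). Reads /proc/meminfo, or a per-node meminfo
    # file when a NUMA node is given, and echoes the value of one field.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }            # per-node files prefix lines with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then          # e.g. HugePages_Rsvd
                echo "$val"                        # value in kB, or a bare count
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # get_meminfo_sketch HugePages_Rsvd     -> 0 in the run above
    # get_meminfo_sketch HugePages_Surp 0   -> node0's surplus hugepages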
setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295196 kB' 'MemFree: 41071584 kB' 'MemAvailable: 43005692 kB' 'Buffers: 3168 kB' 'Cached: 12830004 kB' 'SwapCached: 296 kB' 'Active: 10193448 kB' 'Inactive: 3283876 kB' 'Active(anon): 9739736 kB' 'Inactive(anon): 1397520 kB' 'Active(file): 453712 kB' 'Inactive(file): 1886356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8284156 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647324 kB' 'Mapped: 225124 kB' 'Shmem: 10493104 kB' 'KReclaimable: 498368 kB' 'Slab: 1164144 kB' 'SReclaimable: 498368 kB' 'SUnreclaim: 665776 kB' 'KernelStack: 22384 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 12704052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218968 kB' 'VmallocChunk: 0 kB' 'Percpu: 101248 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4171124 kB' 'DirectMap2M: 57380864 kB' 'DirectMap1G: 7340032 kB' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.480 11:20:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.480 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.481 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:12:10.482 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:12:10.742 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:12:10.742 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:12:10.742 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:12:10.742 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:12:10.742 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:12:10.742 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:12:10.742 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:12:10.742 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:12:10.742 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
32639140 kB' 'MemFree: 21886308 kB' 'MemUsed: 10752832 kB' 'SwapCached: 284 kB' 'Active: 6293144 kB' 'Inactive: 1185808 kB' 'Active(anon): 6000016 kB' 'Inactive(anon): 1001252 kB' 'Active(file): 293128 kB' 'Inactive(file): 184556 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7052532 kB' 'Mapped: 174016 kB' 'AnonPages: 429552 kB' 'Shmem: 6574564 kB' 'KernelStack: 12904 kB' 'PageTables: 6144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 175500 kB' 'Slab: 481068 kB' 'SReclaimable: 175500 kB' 'SUnreclaim: 305568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.743 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:12:10.744 node0=1024 expecting 1024 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:12:10.744 00:12:10.744 real 0m8.537s 00:12:10.744 user 0m3.121s 00:12:10.744 sys 0m5.509s 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:10.744 11:20:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:12:10.744 ************************************ 00:12:10.744 END TEST no_shrink_alloc 00:12:10.744 ************************************ 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
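The records that just finished are the accounting step of no_shrink_alloc: the system-wide HugePages_Total (1024) is compared against nr_hugepages + surplus + reserved, each NUMA node's share is collected, and node0's HugePages_Surp is read from its per-node meminfo before the test prints "node0=1024 expecting 1024". A hedged sketch of that per-node check, assuming 2048 kB pages and the standard sysfs layout; it is a simplification, not the hugepages.sh code itself:

    # Sketch of the per-node verification: count the hugepages assigned to each
    # node and compare node0 against the expected 1024, reporting any surplus.
    declare -a nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    surp=$(awk '/HugePages_Surp/ {print $NF}' /sys/devices/system/node/node0/meminfo)
    echo "node0=${nodes_sys[0]:-0} expecting 1024 (surplus ${surp:-0})"
    [[ ${nodes_sys[0]:-0} -eq 1024 ]] || echo "unexpected node0 hugepage count"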
"${!nodes_sys[@]}" 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:12:10.744 11:20:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:12:10.744 00:12:10.744 real 0m32.840s 00:12:10.744 user 0m11.655s 00:12:10.744 sys 0m20.253s 00:12:10.744 11:20:35 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:10.744 11:20:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:12:10.744 ************************************ 00:12:10.744 END TEST hugepages 00:12:10.744 ************************************ 00:12:10.744 11:20:35 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:12:10.744 11:20:35 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:10.744 11:20:35 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:10.744 11:20:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:10.744 ************************************ 00:12:10.744 START TEST driver 00:12:10.744 ************************************ 00:12:10.744 11:20:35 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:12:11.004 * Looking for test storage... 
00:12:11.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:12:11.004 11:20:35 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:12:11.004 11:20:35 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:11.004 11:20:35 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:12:17.603 11:20:41 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:12:17.603 11:20:41 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:17.603 11:20:41 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:17.603 11:20:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:17.603 ************************************ 00:12:17.603 START TEST guess_driver 00:12:17.603 ************************************ 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:12:17.603 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:12:17.604 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:12:17.604 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:12:17.604 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:12:17.604 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:12:17.604 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:12:17.604 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:12:17.604 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:12:17.604 11:20:41 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:12:17.604 Looking for driver=vfio-pci 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:12:17.604 11:20:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:20.895 11:20:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:22.273 11:20:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:22.273 11:20:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:12:22.273 11:20:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:22.273 11:20:47 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:12:22.273 11:20:47 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:12:22.273 11:20:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:22.273 11:20:47 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:12:28.844 00:12:28.844 real 0m11.280s 00:12:28.844 user 0m2.998s 00:12:28.844 sys 0m5.987s 00:12:28.844 11:20:52 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:28.844 11:20:52 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:12:28.844 ************************************ 00:12:28.844 END TEST guess_driver 00:12:28.844 ************************************ 00:12:28.844 00:12:28.844 real 0m17.057s 00:12:28.844 user 0m4.638s 00:12:28.844 sys 0m9.294s 00:12:28.844 11:20:52 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:28.844 
11:20:52 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:28.844 ************************************ 00:12:28.844 END TEST driver 00:12:28.844 ************************************ 00:12:28.844 11:20:52 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:12:28.844 11:20:52 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:28.844 11:20:52 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:28.844 11:20:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:28.844 ************************************ 00:12:28.844 START TEST devices 00:12:28.844 ************************************ 00:12:28.844 11:20:52 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:12:28.844 * Looking for test storage... 00:12:28.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:12:28.844 11:20:52 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:12:28.844 11:20:52 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:12:28.844 11:20:52 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:28.844 11:20:52 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:12:33.039 11:20:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:12:33.039 11:20:57 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:12:33.039 No valid GPT data, 
bailing 00:12:33.039 11:20:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:33.039 11:20:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:33.039 11:20:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:12:33.039 11:20:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:33.039 11:20:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:33.039 11:20:57 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:12:33.039 11:20:57 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:33.039 11:20:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:33.039 ************************************ 00:12:33.039 START TEST nvme_mount 00:12:33.039 ************************************ 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:12:33.039 11:20:57 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:33.039 11:20:57 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:12:33.607 Creating new GPT entries in memory. 00:12:33.607 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:33.607 other utilities. 00:12:33.607 11:20:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:33.607 11:20:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:33.607 11:20:58 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:33.607 11:20:58 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:33.607 11:20:58 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:12:34.985 Creating new GPT entries in memory. 00:12:34.985 The operation has completed successfully. 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3745694 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
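By this point nvme_mount has zapped the disk, created a single partition, formatted it, and mounted it at the test directory; the verify step that follows only checks which PCI device stays bound while the mount is active. Collapsed into plain commands, the sequence above is roughly the following (device and paths as seen in this run; the real helpers also flock the disk and wait for the partition uevent via sync_dev_uevents.sh):

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                 # wipe any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199      # one ~1 GiB partition (512-byte sectors)
mkfs.ext4 -qF "${disk}p1"                # quiet + force: the partition is brand new
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"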
00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:34.985 11:20:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:39.177 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:39.177 11:21:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:39.177 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:12:39.177 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:12:39.177 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:39.177 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:12:39.177 
11:21:04 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:39.177 11:21:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:43.367 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:43.368 11:21:08 
setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:43.368 11:21:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:47.560 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:47.560 00:12:47.560 real 0m14.613s 00:12:47.560 user 0m4.195s 00:12:47.560 sys 0m8.357s 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:47.560 11:21:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:12:47.560 ************************************ 00:12:47.560 END TEST nvme_mount 00:12:47.560 
************************************ 00:12:47.560 11:21:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:12:47.560 11:21:12 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:12:47.560 11:21:12 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:47.560 11:21:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:47.560 ************************************ 00:12:47.560 START TEST dm_mount 00:12:47.560 ************************************ 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:47.560 11:21:12 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:12:48.496 Creating new GPT entries in memory. 00:12:48.496 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:48.496 other utilities. 00:12:48.496 11:21:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:48.496 11:21:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:48.496 11:21:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:48.496 11:21:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:48.496 11:21:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:12:49.433 Creating new GPT entries in memory. 00:12:49.433 The operation has completed successfully. 
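dm_mount repeats the partitioning step twice, producing two equal ~1 GiB partitions (sectors 2048-2099199 and 2099200-4196351), and the traces that follow stitch them together with dmsetup into /dev/mapper/nvme_dm_test before formatting and mounting it. The exact table comes from devices.sh and is not echoed in this log; a plausible linear-concatenation equivalent would be:

# Build one device-mapper device spanning both freshly created partitions.
# Table format: <start_sector> <num_sectors> linear <backing_device> <offset>
dmsetup create nvme_dm_test << 'TABLE'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
TABLE

readlink -f /dev/mapper/nvme_dm_test     # resolves to /dev/dm-0 in this run
mkfs.ext4 -qF /dev/mapper/nvme_dm_test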
00:12:49.433 11:21:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:49.433 11:21:14 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:49.433 11:21:14 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:49.433 11:21:14 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:49.433 11:21:14 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:12:50.370 The operation has completed successfully. 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3751486 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:50.370 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:12:50.628 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:12:50.628 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:12:50.628 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:12:50.628 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:12:50.628 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:50.629 11:21:15 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.822 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:12:54.823 
11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:54.823 11:21:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.110 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:58.111 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:12:58.370 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:12:58.370 00:12:58.370 real 0m11.077s 00:12:58.370 user 0m2.736s 00:12:58.370 sys 0m5.410s 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:58.370 11:21:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:12:58.370 ************************************ 00:12:58.370 END TEST dm_mount 00:12:58.370 ************************************ 00:12:58.370 11:21:23 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:12:58.370 11:21:23 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:12:58.370 11:21:23 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:12:58.370 11:21:23 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:58.370 
11:21:23 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:58.370 11:21:23 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:58.370 11:21:23 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:58.629 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:12:58.629 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:12:58.629 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:58.629 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:58.629 11:21:23 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:12:58.629 11:21:23 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:12:58.629 11:21:23 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:58.629 11:21:23 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:58.629 11:21:23 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:58.629 11:21:23 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:12:58.629 11:21:23 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:12:58.629 00:12:58.629 real 0m30.840s 00:12:58.629 user 0m8.647s 00:12:58.629 sys 0m17.126s 00:12:58.629 11:21:23 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:58.629 11:21:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:58.629 ************************************ 00:12:58.629 END TEST devices 00:12:58.629 ************************************ 00:12:58.961 00:12:58.961 real 1m49.563s 00:12:58.961 user 0m33.902s 00:12:58.961 sys 1m4.549s 00:12:58.961 11:21:23 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:58.961 11:21:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:58.961 ************************************ 00:12:58.961 END TEST setup.sh 00:12:58.961 ************************************ 00:12:58.961 11:21:23 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:13:03.154 Hugepages 00:13:03.154 node hugesize free / total 00:13:03.154 node0 1048576kB 0 / 0 00:13:03.154 node0 2048kB 2048 / 2048 00:13:03.154 node1 1048576kB 0 / 0 00:13:03.154 node1 2048kB 0 / 0 00:13:03.154 00:13:03.154 Type BDF Vendor Device NUMA Driver Device Block devices 00:13:03.154 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:13:03.154 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:13:03.154 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:13:03.154 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:13:03.154 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:13:03.154 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:13:03.154 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:13:03.154 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:13:03.154 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:13:03.154 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:13:03.154 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:13:03.154 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:13:03.154 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:13:03.154 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:13:03.154 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:13:03.154 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:13:03.154 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:13:03.154 11:21:27 -- spdk/autotest.sh@130 -- # uname -s 00:13:03.154 11:21:27 
-- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:13:03.154 11:21:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:13:03.154 11:21:27 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:13:07.347 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:07.347 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:08.725 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:13:08.984 11:21:33 -- common/autotest_common.sh@1531 -- # sleep 1 00:13:09.924 11:21:34 -- common/autotest_common.sh@1532 -- # bdfs=() 00:13:09.924 11:21:34 -- common/autotest_common.sh@1532 -- # local bdfs 00:13:09.924 11:21:34 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:13:09.924 11:21:34 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:13:09.924 11:21:34 -- common/autotest_common.sh@1512 -- # bdfs=() 00:13:09.924 11:21:34 -- common/autotest_common.sh@1512 -- # local bdfs 00:13:09.924 11:21:34 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:09.924 11:21:34 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:13:09.924 11:21:34 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:13:09.924 11:21:34 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:13:09.924 11:21:34 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:13:09.924 11:21:34 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:13:13.213 Waiting for block devices as requested 00:13:13.473 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:13.473 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:13.473 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:13.732 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:13.732 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:13.732 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:13.992 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:13.992 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:13:13.992 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:14.251 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:14.251 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:14.251 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:14.510 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:14.510 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:14.510 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:14.769 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:13:14.769 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:13:15.028 11:21:39 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 
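The pre-cleanup pass traced in the next block walks each NVMe controller, resolves its /dev/nvmeX node from the PCI address via sysfs, and uses nvme id-ctrl to decide whether a namespace revert is needed. A minimal re-creation of that check is sketched here, assuming nvme-cli is installed; the & 0x8 mask (OACS bit 3, Namespace Management support) is inferred from the traced values oacs=0xe and oacs_ns_manage=8, and the glob over /sys/class/nvme is a simplification of the helper's readlink/grep pair.

bdf=0000:d8:00.0    # PCI address of the controller under test on this node

# resolve the controller character device from sysfs, as the trace does
ctrlr_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$ctrlr_path")

# OACS bit 3 = Namespace Management/Attachment command support
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) != 0 )); then
    # unvmcap == 0 means the existing namespaces already cover the whole drive
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "$ctrlr: namespaces intact, nothing to revert"
fi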
00:13:15.028 11:21:39 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:13:15.028 11:21:39 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:13:15.028 11:21:39 -- common/autotest_common.sh@1501 -- # grep 0000:d8:00.0/nvme/nvme 00:13:15.028 11:21:39 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:13:15.028 11:21:39 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:13:15.028 11:21:39 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:13:15.028 11:21:39 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:13:15.028 11:21:39 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:13:15.028 11:21:39 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:13:15.028 11:21:39 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:13:15.028 11:21:39 -- common/autotest_common.sh@1544 -- # grep oacs 00:13:15.028 11:21:39 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:13:15.028 11:21:39 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:13:15.028 11:21:39 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:13:15.028 11:21:39 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:13:15.028 11:21:39 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:13:15.028 11:21:39 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:13:15.028 11:21:39 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:13:15.028 11:21:39 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:13:15.028 11:21:39 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:13:15.028 11:21:39 -- common/autotest_common.sh@1556 -- # continue 00:13:15.028 11:21:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:13:15.028 11:21:39 -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:15.028 11:21:39 -- common/autotest_common.sh@10 -- # set +x 00:13:15.028 11:21:40 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:13:15.028 11:21:40 -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:15.028 11:21:40 -- common/autotest_common.sh@10 -- # set +x 00:13:15.028 11:21:40 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:13:19.219 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:19.219 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:19.219 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:19.219 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:19.219 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:19.219 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:19.219 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:19.220 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:19.220 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:19.220 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:19.220 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:19.220 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:19.220 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:19.220 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:19.220 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:19.220 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:21.125 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:13:21.125 11:21:45 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:13:21.125 11:21:45 -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:21.125 11:21:45 -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.125 11:21:45 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:13:21.125 11:21:45 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:13:21.125 11:21:45 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:13:21.125 11:21:45 -- common/autotest_common.sh@1576 -- # bdfs=() 00:13:21.125 11:21:45 -- common/autotest_common.sh@1576 -- # local bdfs 00:13:21.125 11:21:45 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:13:21.125 11:21:45 -- common/autotest_common.sh@1512 -- # bdfs=() 00:13:21.125 11:21:45 -- common/autotest_common.sh@1512 -- # local bdfs 00:13:21.125 11:21:45 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:21.125 11:21:45 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:13:21.125 11:21:45 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:13:21.125 11:21:46 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:13:21.125 11:21:46 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:13:21.125 11:21:46 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:13:21.125 11:21:46 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:13:21.125 11:21:46 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:13:21.125 11:21:46 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:13:21.125 11:21:46 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:13:21.125 11:21:46 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:d8:00.0 00:13:21.125 11:21:46 -- common/autotest_common.sh@1591 -- # [[ -z 0000:d8:00.0 ]] 00:13:21.125 11:21:46 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=3762739 00:13:21.125 11:21:46 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:21.125 11:21:46 -- common/autotest_common.sh@1597 -- # waitforlisten 3762739 00:13:21.125 11:21:46 -- common/autotest_common.sh@830 -- # '[' -z 3762739 ']' 00:13:21.125 11:21:46 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.125 11:21:46 -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:21.125 11:21:46 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.125 11:21:46 -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:21.125 11:21:46 -- common/autotest_common.sh@10 -- # set +x 00:13:21.125 [2024-06-10 11:21:46.181747] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
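Before the env suite starts, opal_revert_cleanup (traced above and just below) filters the detected NVMe controllers by PCI device id, starts a throwaway spdk_tgt, attaches each matching controller over JSON-RPC, and attempts an OPAL revert, which fails on this node because the drive reports no OPAL support. A condensed sketch of that flow, with paths and commands taken from the log and a simple sleep standing in for the waitforlisten polling:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$rootdir/scripts/rpc.py"

# PCI addresses of all NVMe controllers, per gen_nvme.sh (same jq filter as the trace)
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

"$rootdir/build/bin/spdk_tgt" &
spdk_tgt_pid=$!
sleep 2    # stand-in for waitforlisten polling /var/tmp/spdk.sock

id=0
for bdf in "${bdfs[@]}"; do
    # keep only controllers with the expected PCI device id (0x0a54 in this run)
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] || continue
    $rpc bdev_nvme_attach_controller -b "nvme$id" -t pcie -a "$bdf"   # prints nvme0n1
    $rpc bdev_nvme_opal_revert -b "nvme$id" -p test || true           # "not support opal" on this drive
    id=$((id + 1))
done

kill "$spdk_tgt_pid"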
00:13:21.125 [2024-06-10 11:21:46.181816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3762739 ] 00:13:21.384 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.384 [2024-06-10 11:21:46.303753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.384 [2024-06-10 11:21:46.386274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.319 11:21:47 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:22.319 11:21:47 -- common/autotest_common.sh@863 -- # return 0 00:13:22.319 11:21:47 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:13:22.319 11:21:47 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:13:22.319 11:21:47 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:13:25.605 nvme0n1 00:13:25.605 11:21:50 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:13:25.605 [2024-06-10 11:21:50.380043] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:13:25.605 request: 00:13:25.605 { 00:13:25.605 "nvme_ctrlr_name": "nvme0", 00:13:25.605 "password": "test", 00:13:25.605 "method": "bdev_nvme_opal_revert", 00:13:25.605 "req_id": 1 00:13:25.605 } 00:13:25.605 Got JSON-RPC error response 00:13:25.605 response: 00:13:25.605 { 00:13:25.605 "code": -32602, 00:13:25.605 "message": "Invalid parameters" 00:13:25.605 } 00:13:25.605 11:21:50 -- common/autotest_common.sh@1603 -- # true 00:13:25.605 11:21:50 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:13:25.605 11:21:50 -- common/autotest_common.sh@1607 -- # killprocess 3762739 00:13:25.605 11:21:50 -- common/autotest_common.sh@949 -- # '[' -z 3762739 ']' 00:13:25.605 11:21:50 -- common/autotest_common.sh@953 -- # kill -0 3762739 00:13:25.605 11:21:50 -- common/autotest_common.sh@954 -- # uname 00:13:25.605 11:21:50 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:25.605 11:21:50 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3762739 00:13:25.605 11:21:50 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:25.605 11:21:50 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:25.605 11:21:50 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3762739' 00:13:25.605 killing process with pid 3762739 00:13:25.605 11:21:50 -- common/autotest_common.sh@968 -- # kill 3762739 00:13:25.605 11:21:50 -- common/autotest_common.sh@973 -- # wait 3762739 00:13:28.139 11:21:52 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:13:28.139 11:21:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:13:28.139 11:21:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:28.139 11:21:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:13:28.139 11:21:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:13:28.139 11:21:52 -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:28.139 11:21:52 -- common/autotest_common.sh@10 -- # set +x 00:13:28.139 11:21:52 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:13:28.139 11:21:52 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:13:28.139 11:21:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:28.139 11:21:52 
-- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:28.139 11:21:52 -- common/autotest_common.sh@10 -- # set +x 00:13:28.139 ************************************ 00:13:28.139 START TEST env 00:13:28.139 ************************************ 00:13:28.139 11:21:52 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:13:28.139 * Looking for test storage... 00:13:28.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:13:28.139 11:21:52 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:13:28.139 11:21:52 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:28.139 11:21:52 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:28.139 11:21:52 env -- common/autotest_common.sh@10 -- # set +x 00:13:28.139 ************************************ 00:13:28.140 START TEST env_memory 00:13:28.140 ************************************ 00:13:28.140 11:21:52 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:13:28.140 00:13:28.140 00:13:28.140 CUnit - A unit testing framework for C - Version 2.1-3 00:13:28.140 http://cunit.sourceforge.net/ 00:13:28.140 00:13:28.140 00:13:28.140 Suite: memory 00:13:28.140 Test: alloc and free memory map ...[2024-06-10 11:21:52.951585] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:13:28.140 passed 00:13:28.140 Test: mem map translation ...[2024-06-10 11:21:52.969800] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:13:28.140 [2024-06-10 11:21:52.969816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:13:28.140 [2024-06-10 11:21:52.969851] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:13:28.140 [2024-06-10 11:21:52.969860] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:13:28.140 passed 00:13:28.140 Test: mem map registration ...[2024-06-10 11:21:53.004768] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:13:28.140 [2024-06-10 11:21:53.004784] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:13:28.140 passed 00:13:28.140 Test: mem map adjacent registrations ...passed 00:13:28.140 00:13:28.140 Run Summary: Type Total Ran Passed Failed Inactive 00:13:28.140 suites 1 1 n/a 0 0 00:13:28.140 tests 4 4 4 0 0 00:13:28.140 asserts 152 152 152 0 n/a 00:13:28.140 00:13:28.140 Elapsed time = 0.129 seconds 00:13:28.140 00:13:28.140 real 0m0.144s 00:13:28.140 user 0m0.127s 00:13:28.140 sys 0m0.016s 00:13:28.140 11:21:53 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:28.140 11:21:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:13:28.140 
************************************ 00:13:28.140 END TEST env_memory 00:13:28.140 ************************************ 00:13:28.140 11:21:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:13:28.140 11:21:53 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:28.140 11:21:53 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:28.140 11:21:53 env -- common/autotest_common.sh@10 -- # set +x 00:13:28.140 ************************************ 00:13:28.140 START TEST env_vtophys 00:13:28.140 ************************************ 00:13:28.140 11:21:53 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:13:28.140 EAL: lib.eal log level changed from notice to debug 00:13:28.140 EAL: Detected lcore 0 as core 0 on socket 0 00:13:28.140 EAL: Detected lcore 1 as core 1 on socket 0 00:13:28.140 EAL: Detected lcore 2 as core 2 on socket 0 00:13:28.140 EAL: Detected lcore 3 as core 3 on socket 0 00:13:28.140 EAL: Detected lcore 4 as core 4 on socket 0 00:13:28.140 EAL: Detected lcore 5 as core 5 on socket 0 00:13:28.140 EAL: Detected lcore 6 as core 6 on socket 0 00:13:28.140 EAL: Detected lcore 7 as core 8 on socket 0 00:13:28.140 EAL: Detected lcore 8 as core 9 on socket 0 00:13:28.140 EAL: Detected lcore 9 as core 10 on socket 0 00:13:28.140 EAL: Detected lcore 10 as core 11 on socket 0 00:13:28.140 EAL: Detected lcore 11 as core 12 on socket 0 00:13:28.140 EAL: Detected lcore 12 as core 13 on socket 0 00:13:28.140 EAL: Detected lcore 13 as core 14 on socket 0 00:13:28.140 EAL: Detected lcore 14 as core 16 on socket 0 00:13:28.140 EAL: Detected lcore 15 as core 17 on socket 0 00:13:28.140 EAL: Detected lcore 16 as core 18 on socket 0 00:13:28.140 EAL: Detected lcore 17 as core 19 on socket 0 00:13:28.140 EAL: Detected lcore 18 as core 20 on socket 0 00:13:28.140 EAL: Detected lcore 19 as core 21 on socket 0 00:13:28.140 EAL: Detected lcore 20 as core 22 on socket 0 00:13:28.140 EAL: Detected lcore 21 as core 24 on socket 0 00:13:28.140 EAL: Detected lcore 22 as core 25 on socket 0 00:13:28.140 EAL: Detected lcore 23 as core 26 on socket 0 00:13:28.140 EAL: Detected lcore 24 as core 27 on socket 0 00:13:28.140 EAL: Detected lcore 25 as core 28 on socket 0 00:13:28.140 EAL: Detected lcore 26 as core 29 on socket 0 00:13:28.140 EAL: Detected lcore 27 as core 30 on socket 0 00:13:28.140 EAL: Detected lcore 28 as core 0 on socket 1 00:13:28.140 EAL: Detected lcore 29 as core 1 on socket 1 00:13:28.140 EAL: Detected lcore 30 as core 2 on socket 1 00:13:28.140 EAL: Detected lcore 31 as core 3 on socket 1 00:13:28.140 EAL: Detected lcore 32 as core 4 on socket 1 00:13:28.140 EAL: Detected lcore 33 as core 5 on socket 1 00:13:28.140 EAL: Detected lcore 34 as core 6 on socket 1 00:13:28.140 EAL: Detected lcore 35 as core 8 on socket 1 00:13:28.140 EAL: Detected lcore 36 as core 9 on socket 1 00:13:28.140 EAL: Detected lcore 37 as core 10 on socket 1 00:13:28.140 EAL: Detected lcore 38 as core 11 on socket 1 00:13:28.140 EAL: Detected lcore 39 as core 12 on socket 1 00:13:28.140 EAL: Detected lcore 40 as core 13 on socket 1 00:13:28.140 EAL: Detected lcore 41 as core 14 on socket 1 00:13:28.140 EAL: Detected lcore 42 as core 16 on socket 1 00:13:28.140 EAL: Detected lcore 43 as core 17 on socket 1 00:13:28.140 EAL: Detected lcore 44 as core 18 on socket 1 00:13:28.140 EAL: Detected lcore 45 as core 19 on socket 1 00:13:28.140 EAL: 
Detected lcore 46 as core 20 on socket 1 00:13:28.140 EAL: Detected lcore 47 as core 21 on socket 1 00:13:28.140 EAL: Detected lcore 48 as core 22 on socket 1 00:13:28.140 EAL: Detected lcore 49 as core 24 on socket 1 00:13:28.140 EAL: Detected lcore 50 as core 25 on socket 1 00:13:28.140 EAL: Detected lcore 51 as core 26 on socket 1 00:13:28.140 EAL: Detected lcore 52 as core 27 on socket 1 00:13:28.140 EAL: Detected lcore 53 as core 28 on socket 1 00:13:28.140 EAL: Detected lcore 54 as core 29 on socket 1 00:13:28.140 EAL: Detected lcore 55 as core 30 on socket 1 00:13:28.140 EAL: Detected lcore 56 as core 0 on socket 0 00:13:28.140 EAL: Detected lcore 57 as core 1 on socket 0 00:13:28.140 EAL: Detected lcore 58 as core 2 on socket 0 00:13:28.140 EAL: Detected lcore 59 as core 3 on socket 0 00:13:28.140 EAL: Detected lcore 60 as core 4 on socket 0 00:13:28.140 EAL: Detected lcore 61 as core 5 on socket 0 00:13:28.140 EAL: Detected lcore 62 as core 6 on socket 0 00:13:28.140 EAL: Detected lcore 63 as core 8 on socket 0 00:13:28.140 EAL: Detected lcore 64 as core 9 on socket 0 00:13:28.140 EAL: Detected lcore 65 as core 10 on socket 0 00:13:28.140 EAL: Detected lcore 66 as core 11 on socket 0 00:13:28.140 EAL: Detected lcore 67 as core 12 on socket 0 00:13:28.140 EAL: Detected lcore 68 as core 13 on socket 0 00:13:28.140 EAL: Detected lcore 69 as core 14 on socket 0 00:13:28.140 EAL: Detected lcore 70 as core 16 on socket 0 00:13:28.140 EAL: Detected lcore 71 as core 17 on socket 0 00:13:28.140 EAL: Detected lcore 72 as core 18 on socket 0 00:13:28.140 EAL: Detected lcore 73 as core 19 on socket 0 00:13:28.140 EAL: Detected lcore 74 as core 20 on socket 0 00:13:28.140 EAL: Detected lcore 75 as core 21 on socket 0 00:13:28.140 EAL: Detected lcore 76 as core 22 on socket 0 00:13:28.140 EAL: Detected lcore 77 as core 24 on socket 0 00:13:28.140 EAL: Detected lcore 78 as core 25 on socket 0 00:13:28.140 EAL: Detected lcore 79 as core 26 on socket 0 00:13:28.140 EAL: Detected lcore 80 as core 27 on socket 0 00:13:28.140 EAL: Detected lcore 81 as core 28 on socket 0 00:13:28.140 EAL: Detected lcore 82 as core 29 on socket 0 00:13:28.140 EAL: Detected lcore 83 as core 30 on socket 0 00:13:28.140 EAL: Detected lcore 84 as core 0 on socket 1 00:13:28.140 EAL: Detected lcore 85 as core 1 on socket 1 00:13:28.140 EAL: Detected lcore 86 as core 2 on socket 1 00:13:28.140 EAL: Detected lcore 87 as core 3 on socket 1 00:13:28.140 EAL: Detected lcore 88 as core 4 on socket 1 00:13:28.140 EAL: Detected lcore 89 as core 5 on socket 1 00:13:28.140 EAL: Detected lcore 90 as core 6 on socket 1 00:13:28.140 EAL: Detected lcore 91 as core 8 on socket 1 00:13:28.140 EAL: Detected lcore 92 as core 9 on socket 1 00:13:28.140 EAL: Detected lcore 93 as core 10 on socket 1 00:13:28.140 EAL: Detected lcore 94 as core 11 on socket 1 00:13:28.140 EAL: Detected lcore 95 as core 12 on socket 1 00:13:28.140 EAL: Detected lcore 96 as core 13 on socket 1 00:13:28.140 EAL: Detected lcore 97 as core 14 on socket 1 00:13:28.140 EAL: Detected lcore 98 as core 16 on socket 1 00:13:28.140 EAL: Detected lcore 99 as core 17 on socket 1 00:13:28.140 EAL: Detected lcore 100 as core 18 on socket 1 00:13:28.140 EAL: Detected lcore 101 as core 19 on socket 1 00:13:28.140 EAL: Detected lcore 102 as core 20 on socket 1 00:13:28.140 EAL: Detected lcore 103 as core 21 on socket 1 00:13:28.140 EAL: Detected lcore 104 as core 22 on socket 1 00:13:28.140 EAL: Detected lcore 105 as core 24 on socket 1 00:13:28.140 EAL: Detected lcore 106 as core 
25 on socket 1 00:13:28.140 EAL: Detected lcore 107 as core 26 on socket 1 00:13:28.140 EAL: Detected lcore 108 as core 27 on socket 1 00:13:28.140 EAL: Detected lcore 109 as core 28 on socket 1 00:13:28.140 EAL: Detected lcore 110 as core 29 on socket 1 00:13:28.140 EAL: Detected lcore 111 as core 30 on socket 1 00:13:28.140 EAL: Maximum logical cores by configuration: 128 00:13:28.140 EAL: Detected CPU lcores: 112 00:13:28.140 EAL: Detected NUMA nodes: 2 00:13:28.141 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:13:28.141 EAL: Detected shared linkage of DPDK 00:13:28.141 EAL: No shared files mode enabled, IPC will be disabled 00:13:28.141 EAL: Bus pci wants IOVA as 'DC' 00:13:28.141 EAL: Buses did not request a specific IOVA mode. 00:13:28.141 EAL: IOMMU is available, selecting IOVA as VA mode. 00:13:28.141 EAL: Selected IOVA mode 'VA' 00:13:28.141 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.141 EAL: Probing VFIO support... 00:13:28.141 EAL: IOMMU type 1 (Type 1) is supported 00:13:28.141 EAL: IOMMU type 7 (sPAPR) is not supported 00:13:28.141 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:13:28.141 EAL: VFIO support initialized 00:13:28.141 EAL: Ask a virtual area of 0x2e000 bytes 00:13:28.141 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:13:28.141 EAL: Setting up physically contiguous memory... 00:13:28.141 EAL: Setting maximum number of open files to 524288 00:13:28.141 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:13:28.141 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:13:28.141 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:13:28.141 EAL: Ask a virtual area of 0x61000 bytes 00:13:28.141 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:13:28.141 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:28.141 EAL: Ask a virtual area of 0x400000000 bytes 00:13:28.141 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:13:28.141 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:13:28.141 EAL: Ask a virtual area of 0x61000 bytes 00:13:28.141 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:13:28.141 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:28.141 EAL: Ask a virtual area of 0x400000000 bytes 00:13:28.141 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:13:28.141 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:13:28.141 EAL: Ask a virtual area of 0x61000 bytes 00:13:28.141 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:13:28.141 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:28.141 EAL: Ask a virtual area of 0x400000000 bytes 00:13:28.141 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:13:28.141 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:13:28.141 EAL: Ask a virtual area of 0x61000 bytes 00:13:28.141 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:13:28.141 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:13:28.141 EAL: Ask a virtual area of 0x400000000 bytes 00:13:28.141 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:13:28.141 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:13:28.141 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:13:28.141 EAL: Ask a virtual area of 0x61000 bytes 00:13:28.141 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:13:28.141 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:13:28.141 EAL: Ask a virtual area of 0x400000000 bytes 00:13:28.141 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:13:28.141 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:13:28.141 EAL: Ask a virtual area of 0x61000 bytes 00:13:28.141 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:13:28.141 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:13:28.141 EAL: Ask a virtual area of 0x400000000 bytes 00:13:28.141 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:13:28.141 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:13:28.141 EAL: Ask a virtual area of 0x61000 bytes 00:13:28.141 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:13:28.141 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:13:28.141 EAL: Ask a virtual area of 0x400000000 bytes 00:13:28.141 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:13:28.141 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:13:28.141 EAL: Ask a virtual area of 0x61000 bytes 00:13:28.141 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:13:28.141 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:13:28.141 EAL: Ask a virtual area of 0x400000000 bytes 00:13:28.141 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:13:28.141 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:13:28.141 EAL: Hugepages will be freed exactly as allocated. 00:13:28.141 EAL: No shared files mode enabled, IPC is disabled 00:13:28.141 EAL: No shared files mode enabled, IPC is disabled 00:13:28.141 EAL: TSC frequency is ~2500000 KHz 00:13:28.141 EAL: Main lcore 0 is ready (tid=7f4eb5887a00;cpuset=[0]) 00:13:28.141 EAL: Trying to obtain current memory policy. 00:13:28.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.141 EAL: Restoring previous memory policy: 0 00:13:28.141 EAL: request: mp_malloc_sync 00:13:28.141 EAL: No shared files mode enabled, IPC is disabled 00:13:28.141 EAL: Heap on socket 0 was expanded by 2MB 00:13:28.141 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: No PCI address specified using 'addr=' in: bus=pci 00:13:28.400 EAL: Mem event callback 'spdk:(nil)' registered 00:13:28.400 00:13:28.400 00:13:28.400 CUnit - A unit testing framework for C - Version 2.1-3 00:13:28.400 http://cunit.sourceforge.net/ 00:13:28.400 00:13:28.400 00:13:28.400 Suite: components_suite 00:13:28.400 Test: vtophys_malloc_test ...passed 00:13:28.400 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:13:28.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.400 EAL: Restoring previous memory policy: 4 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was expanded by 4MB 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was shrunk by 4MB 00:13:28.400 EAL: Trying to obtain current memory policy. 
00:13:28.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.400 EAL: Restoring previous memory policy: 4 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was expanded by 6MB 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was shrunk by 6MB 00:13:28.400 EAL: Trying to obtain current memory policy. 00:13:28.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.400 EAL: Restoring previous memory policy: 4 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was expanded by 10MB 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was shrunk by 10MB 00:13:28.400 EAL: Trying to obtain current memory policy. 00:13:28.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.400 EAL: Restoring previous memory policy: 4 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was expanded by 18MB 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was shrunk by 18MB 00:13:28.400 EAL: Trying to obtain current memory policy. 00:13:28.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.400 EAL: Restoring previous memory policy: 4 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was expanded by 34MB 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was shrunk by 34MB 00:13:28.400 EAL: Trying to obtain current memory policy. 00:13:28.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.400 EAL: Restoring previous memory policy: 4 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.400 EAL: Heap on socket 0 was expanded by 66MB 00:13:28.400 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.400 EAL: request: mp_malloc_sync 00:13:28.400 EAL: No shared files mode enabled, IPC is disabled 00:13:28.401 EAL: Heap on socket 0 was shrunk by 66MB 00:13:28.401 EAL: Trying to obtain current memory policy. 
00:13:28.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.401 EAL: Restoring previous memory policy: 4 00:13:28.401 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.401 EAL: request: mp_malloc_sync 00:13:28.401 EAL: No shared files mode enabled, IPC is disabled 00:13:28.401 EAL: Heap on socket 0 was expanded by 130MB 00:13:28.401 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.401 EAL: request: mp_malloc_sync 00:13:28.401 EAL: No shared files mode enabled, IPC is disabled 00:13:28.401 EAL: Heap on socket 0 was shrunk by 130MB 00:13:28.401 EAL: Trying to obtain current memory policy. 00:13:28.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.401 EAL: Restoring previous memory policy: 4 00:13:28.401 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.401 EAL: request: mp_malloc_sync 00:13:28.401 EAL: No shared files mode enabled, IPC is disabled 00:13:28.401 EAL: Heap on socket 0 was expanded by 258MB 00:13:28.401 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.660 EAL: request: mp_malloc_sync 00:13:28.660 EAL: No shared files mode enabled, IPC is disabled 00:13:28.660 EAL: Heap on socket 0 was shrunk by 258MB 00:13:28.660 EAL: Trying to obtain current memory policy. 00:13:28.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.660 EAL: Restoring previous memory policy: 4 00:13:28.660 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.660 EAL: request: mp_malloc_sync 00:13:28.660 EAL: No shared files mode enabled, IPC is disabled 00:13:28.660 EAL: Heap on socket 0 was expanded by 514MB 00:13:28.660 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.920 EAL: request: mp_malloc_sync 00:13:28.920 EAL: No shared files mode enabled, IPC is disabled 00:13:28.920 EAL: Heap on socket 0 was shrunk by 514MB 00:13:28.920 EAL: Trying to obtain current memory policy. 
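The vtophys_spdk_malloc_test rounds continue below with the final 1026 MB allocation and the suite summary; each round allocates a buffer roughly twice the previous size, the registered 'spdk' mem event callback reports the matching heap expansion, and freeing the buffer shrinks the heap again. Not part of the harness, but a quick way to confirm from a saved copy of this output (a hypothetical vtophys.log) that every expansion was paired with a shrink of the same size:

grep -oE 'Heap on socket 0 was (expanded|shrunk) by [0-9]+MB' vtophys.log \
    | awk '{act=$6; size=$8}
           act=="expanded"{e[size]++}
           act=="shrunk"  {s[size]++}
           END{for (k in e) printf "%-7s expanded=%d shrunk=%d\n", k, e[k], s[k]}'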
00:13:28.920 EAL: Setting policy MPOL_PREFERRED for socket 0 00:13:28.920 EAL: Restoring previous memory policy: 4 00:13:28.920 EAL: Calling mem event callback 'spdk:(nil)' 00:13:28.920 EAL: request: mp_malloc_sync 00:13:28.920 EAL: No shared files mode enabled, IPC is disabled 00:13:28.920 EAL: Heap on socket 0 was expanded by 1026MB 00:13:29.178 EAL: Calling mem event callback 'spdk:(nil)' 00:13:29.438 EAL: request: mp_malloc_sync 00:13:29.438 EAL: No shared files mode enabled, IPC is disabled 00:13:29.438 EAL: Heap on socket 0 was shrunk by 1026MB 00:13:29.438 passed 00:13:29.438 00:13:29.438 Run Summary: Type Total Ran Passed Failed Inactive 00:13:29.438 suites 1 1 n/a 0 0 00:13:29.438 tests 2 2 2 0 0 00:13:29.438 asserts 497 497 497 0 n/a 00:13:29.438 00:13:29.438 Elapsed time = 1.020 seconds 00:13:29.438 EAL: Calling mem event callback 'spdk:(nil)' 00:13:29.438 EAL: request: mp_malloc_sync 00:13:29.438 EAL: No shared files mode enabled, IPC is disabled 00:13:29.438 EAL: Heap on socket 0 was shrunk by 2MB 00:13:29.438 EAL: No shared files mode enabled, IPC is disabled 00:13:29.438 EAL: No shared files mode enabled, IPC is disabled 00:13:29.438 EAL: No shared files mode enabled, IPC is disabled 00:13:29.438 00:13:29.438 real 0m1.204s 00:13:29.438 user 0m0.684s 00:13:29.438 sys 0m0.488s 00:13:29.438 11:21:54 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:29.438 11:21:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:13:29.438 ************************************ 00:13:29.438 END TEST env_vtophys 00:13:29.438 ************************************ 00:13:29.438 11:21:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:13:29.438 11:21:54 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:29.438 11:21:54 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:29.438 11:21:54 env -- common/autotest_common.sh@10 -- # set +x 00:13:29.438 ************************************ 00:13:29.438 START TEST env_pci 00:13:29.438 ************************************ 00:13:29.438 11:21:54 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:13:29.438 00:13:29.438 00:13:29.438 CUnit - A unit testing framework for C - Version 2.1-3 00:13:29.438 http://cunit.sourceforge.net/ 00:13:29.438 00:13:29.438 00:13:29.438 Suite: pci 00:13:29.438 Test: pci_hook ...[2024-06-10 11:21:54.433376] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3764270 has claimed it 00:13:29.438 EAL: Cannot find device (10000:00:01.0) 00:13:29.438 EAL: Failed to attach device on primary process 00:13:29.438 passed 00:13:29.438 00:13:29.438 Run Summary: Type Total Ran Passed Failed Inactive 00:13:29.438 suites 1 1 n/a 0 0 00:13:29.438 tests 1 1 1 0 0 00:13:29.438 asserts 25 25 25 0 n/a 00:13:29.438 00:13:29.438 Elapsed time = 0.045 seconds 00:13:29.438 00:13:29.438 real 0m0.064s 00:13:29.438 user 0m0.015s 00:13:29.438 sys 0m0.049s 00:13:29.438 11:21:54 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:29.438 11:21:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:13:29.438 ************************************ 00:13:29.438 END TEST env_pci 00:13:29.438 ************************************ 00:13:29.438 11:21:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:13:29.438 
11:21:54 env -- env/env.sh@15 -- # uname 00:13:29.438 11:21:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:13:29.438 11:21:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:13:29.438 11:21:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:29.438 11:21:54 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:29.438 11:21:54 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:29.438 11:21:54 env -- common/autotest_common.sh@10 -- # set +x 00:13:29.697 ************************************ 00:13:29.697 START TEST env_dpdk_post_init 00:13:29.697 ************************************ 00:13:29.697 11:21:54 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:13:29.697 EAL: Detected CPU lcores: 112 00:13:29.697 EAL: Detected NUMA nodes: 2 00:13:29.697 EAL: Detected shared linkage of DPDK 00:13:29.697 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:29.697 EAL: Selected IOVA mode 'VA' 00:13:29.697 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.697 EAL: VFIO support initialized 00:13:29.697 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:29.697 EAL: Using IOMMU type 1 (Type 1) 00:13:29.697 EAL: Ignore mapping IO port bar(1) 00:13:29.697 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:13:29.697 EAL: Ignore mapping IO port bar(1) 00:13:29.697 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:13:29.957 EAL: Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:13:29.957 EAL: 
Ignore mapping IO port bar(1) 00:13:29.957 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:13:30.894 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:13:34.208 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:13:34.208 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:13:34.467 Starting DPDK initialization... 00:13:34.467 Starting SPDK post initialization... 00:13:34.467 SPDK NVMe probe 00:13:34.467 Attaching to 0000:d8:00.0 00:13:34.467 Attached to 0000:d8:00.0 00:13:34.467 Cleaning up... 00:13:34.467 00:13:34.467 real 0m4.968s 00:13:34.467 user 0m3.586s 00:13:34.467 sys 0m0.436s 00:13:34.467 11:21:59 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:34.467 11:21:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:13:34.467 ************************************ 00:13:34.467 END TEST env_dpdk_post_init 00:13:34.467 ************************************ 00:13:34.727 11:21:59 env -- env/env.sh@26 -- # uname 00:13:34.727 11:21:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:13:34.727 11:21:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:13:34.727 11:21:59 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:34.727 11:21:59 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:34.727 11:21:59 env -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 ************************************ 00:13:34.727 START TEST env_mem_callbacks 00:13:34.727 ************************************ 00:13:34.727 11:21:59 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:13:34.727 EAL: Detected CPU lcores: 112 00:13:34.727 EAL: Detected NUMA nodes: 2 00:13:34.727 EAL: Detected shared linkage of DPDK 00:13:34.727 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:34.727 EAL: Selected IOVA mode 'VA' 00:13:34.727 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.727 EAL: VFIO support initialized 00:13:34.727 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:34.727 00:13:34.727 00:13:34.727 CUnit - A unit testing framework for C - Version 2.1-3 00:13:34.727 http://cunit.sourceforge.net/ 00:13:34.727 00:13:34.727 00:13:34.727 Suite: memory 00:13:34.727 Test: test ... 
00:13:34.727 register 0x200000200000 2097152 00:13:34.727 malloc 3145728 00:13:34.727 register 0x200000400000 4194304 00:13:34.727 buf 0x200000500000 len 3145728 PASSED 00:13:34.727 malloc 64 00:13:34.727 buf 0x2000004fff40 len 64 PASSED 00:13:34.727 malloc 4194304 00:13:34.727 register 0x200000800000 6291456 00:13:34.727 buf 0x200000a00000 len 4194304 PASSED 00:13:34.727 free 0x200000500000 3145728 00:13:34.727 free 0x2000004fff40 64 00:13:34.727 unregister 0x200000400000 4194304 PASSED 00:13:34.727 free 0x200000a00000 4194304 00:13:34.727 unregister 0x200000800000 6291456 PASSED 00:13:34.727 malloc 8388608 00:13:34.727 register 0x200000400000 10485760 00:13:34.727 buf 0x200000600000 len 8388608 PASSED 00:13:34.727 free 0x200000600000 8388608 00:13:34.727 unregister 0x200000400000 10485760 PASSED 00:13:34.727 passed 00:13:34.727 00:13:34.727 Run Summary: Type Total Ran Passed Failed Inactive 00:13:34.727 suites 1 1 n/a 0 0 00:13:34.727 tests 1 1 1 0 0 00:13:34.727 asserts 15 15 15 0 n/a 00:13:34.727 00:13:34.727 Elapsed time = 0.008 seconds 00:13:34.727 00:13:34.727 real 0m0.090s 00:13:34.727 user 0m0.024s 00:13:34.727 sys 0m0.065s 00:13:34.727 11:21:59 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:34.727 11:21:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 ************************************ 00:13:34.727 END TEST env_mem_callbacks 00:13:34.727 ************************************ 00:13:34.727 00:13:34.727 real 0m7.008s 00:13:34.727 user 0m4.642s 00:13:34.727 sys 0m1.427s 00:13:34.727 11:21:59 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:34.727 11:21:59 env -- common/autotest_common.sh@10 -- # set +x 00:13:34.727 ************************************ 00:13:34.727 END TEST env 00:13:34.727 ************************************ 00:13:34.727 11:21:59 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:13:34.727 11:21:59 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:34.727 11:21:59 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:34.727 11:21:59 -- common/autotest_common.sh@10 -- # set +x 00:13:34.986 ************************************ 00:13:34.986 START TEST rpc 00:13:34.986 ************************************ 00:13:34.986 11:21:59 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:13:34.986 * Looking for test storage... 00:13:34.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:13:34.986 11:21:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3765270 00:13:34.986 11:21:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:34.986 11:21:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:13:34.986 11:21:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3765270 00:13:34.986 11:21:59 rpc -- common/autotest_common.sh@830 -- # '[' -z 3765270 ']' 00:13:34.986 11:21:59 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.986 11:21:59 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:34.986 11:21:59 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
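The rpc suite startup traced just above launches spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then blocks in waitforlisten until the target's RPC socket answers, giving up after max_retries attempts. A rough stand-in for that wait loop follows; the use of rpc.py spdk_get_version as the liveness probe and the 0.5 s retry interval are assumptions, as only the pid argument, /var/tmp/spdk.sock, and max_retries=100 appear in the trace.

wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1        # target exited before listening
        if scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null; then
            return 0                                  # socket is accepting RPCs
        fi
        sleep 0.5
    done
    return 1                                          # gave up after max_retries
}

# usage mirroring the trace: build/bin/spdk_tgt -e bdev & wait_for_rpc $!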
00:13:34.986 11:21:59 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:34.986 11:21:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.986 [2024-06-10 11:22:00.007293] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:13:34.986 [2024-06-10 11:22:00.007360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765270 ] 00:13:34.986 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.245 [2024-06-10 11:22:00.133993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.245 [2024-06-10 11:22:00.218626] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:13:35.245 [2024-06-10 11:22:00.218669] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3765270' to capture a snapshot of events at runtime. 00:13:35.245 [2024-06-10 11:22:00.218683] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.245 [2024-06-10 11:22:00.218695] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.245 [2024-06-10 11:22:00.218704] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3765270 for offline analysis/debug. 00:13:35.245 [2024-06-10 11:22:00.218731] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.849 11:22:00 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:35.849 11:22:00 rpc -- common/autotest_common.sh@863 -- # return 0 00:13:35.849 11:22:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:13:35.849 11:22:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:13:35.849 11:22:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:13:35.849 11:22:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:13:35.849 11:22:00 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:35.849 11:22:00 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:35.849 11:22:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.849 ************************************ 00:13:35.849 START TEST rpc_integrity 00:13:35.849 ************************************ 00:13:35.849 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:13:35.849 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:35.849 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.109 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.109 11:22:00 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.109 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:36.109 11:22:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:13:36.109 11:22:01 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:36.109 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.109 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:13:36.109 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.109 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:36.109 { 00:13:36.109 "name": "Malloc0", 00:13:36.109 "aliases": [ 00:13:36.109 "8c1cb0fe-af93-4773-851b-6ef708fd17b7" 00:13:36.109 ], 00:13:36.109 "product_name": "Malloc disk", 00:13:36.109 "block_size": 512, 00:13:36.109 "num_blocks": 16384, 00:13:36.109 "uuid": "8c1cb0fe-af93-4773-851b-6ef708fd17b7", 00:13:36.109 "assigned_rate_limits": { 00:13:36.109 "rw_ios_per_sec": 0, 00:13:36.109 "rw_mbytes_per_sec": 0, 00:13:36.109 "r_mbytes_per_sec": 0, 00:13:36.109 "w_mbytes_per_sec": 0 00:13:36.109 }, 00:13:36.109 "claimed": false, 00:13:36.109 "zoned": false, 00:13:36.109 "supported_io_types": { 00:13:36.109 "read": true, 00:13:36.109 "write": true, 00:13:36.109 "unmap": true, 00:13:36.109 "write_zeroes": true, 00:13:36.109 "flush": true, 00:13:36.109 "reset": true, 00:13:36.109 "compare": false, 00:13:36.109 "compare_and_write": false, 00:13:36.109 "abort": true, 00:13:36.109 "nvme_admin": false, 00:13:36.109 "nvme_io": false 00:13:36.109 }, 00:13:36.109 "memory_domains": [ 00:13:36.109 { 00:13:36.109 "dma_device_id": "system", 00:13:36.109 "dma_device_type": 1 00:13:36.109 }, 00:13:36.109 { 00:13:36.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.109 "dma_device_type": 2 00:13:36.109 } 00:13:36.109 ], 00:13:36.109 "driver_specific": {} 00:13:36.109 } 00:13:36.109 ]' 00:13:36.109 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:13:36.109 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:36.109 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.109 [2024-06-10 11:22:01.075628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:13:36.109 [2024-06-10 11:22:01.075663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.109 [2024-06-10 11:22:01.075690] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ee3090 00:13:36.109 [2024-06-10 11:22:01.075702] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.109 [2024-06-10 11:22:01.077106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.109 [2024-06-10 11:22:01.077131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:36.109 Passthru0 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.109 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.109 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.109 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:36.109 { 00:13:36.109 "name": "Malloc0", 00:13:36.109 "aliases": [ 00:13:36.109 "8c1cb0fe-af93-4773-851b-6ef708fd17b7" 00:13:36.109 ], 00:13:36.109 "product_name": "Malloc disk", 00:13:36.109 "block_size": 512, 00:13:36.109 "num_blocks": 16384, 00:13:36.109 "uuid": "8c1cb0fe-af93-4773-851b-6ef708fd17b7", 00:13:36.109 "assigned_rate_limits": { 00:13:36.109 "rw_ios_per_sec": 0, 00:13:36.109 "rw_mbytes_per_sec": 0, 00:13:36.109 "r_mbytes_per_sec": 0, 00:13:36.109 "w_mbytes_per_sec": 0 00:13:36.109 }, 00:13:36.109 "claimed": true, 00:13:36.109 "claim_type": "exclusive_write", 00:13:36.109 "zoned": false, 00:13:36.109 "supported_io_types": { 00:13:36.109 "read": true, 00:13:36.109 "write": true, 00:13:36.109 "unmap": true, 00:13:36.109 "write_zeroes": true, 00:13:36.109 "flush": true, 00:13:36.109 "reset": true, 00:13:36.109 "compare": false, 00:13:36.109 "compare_and_write": false, 00:13:36.109 "abort": true, 00:13:36.109 "nvme_admin": false, 00:13:36.109 "nvme_io": false 00:13:36.109 }, 00:13:36.109 "memory_domains": [ 00:13:36.109 { 00:13:36.109 "dma_device_id": "system", 00:13:36.109 "dma_device_type": 1 00:13:36.109 }, 00:13:36.109 { 00:13:36.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.109 "dma_device_type": 2 00:13:36.109 } 00:13:36.109 ], 00:13:36.109 "driver_specific": {} 00:13:36.109 }, 00:13:36.109 { 00:13:36.109 "name": "Passthru0", 00:13:36.109 "aliases": [ 00:13:36.109 "71244e96-f606-5652-a298-f836891da32b" 00:13:36.109 ], 00:13:36.109 "product_name": "passthru", 00:13:36.109 "block_size": 512, 00:13:36.109 "num_blocks": 16384, 00:13:36.109 "uuid": "71244e96-f606-5652-a298-f836891da32b", 00:13:36.109 "assigned_rate_limits": { 00:13:36.109 "rw_ios_per_sec": 0, 00:13:36.109 "rw_mbytes_per_sec": 0, 00:13:36.109 "r_mbytes_per_sec": 0, 00:13:36.109 "w_mbytes_per_sec": 0 00:13:36.109 }, 00:13:36.109 "claimed": false, 00:13:36.109 "zoned": false, 00:13:36.109 "supported_io_types": { 00:13:36.109 "read": true, 00:13:36.109 "write": true, 00:13:36.109 "unmap": true, 00:13:36.109 "write_zeroes": true, 00:13:36.109 "flush": true, 00:13:36.109 "reset": true, 00:13:36.109 "compare": false, 00:13:36.110 "compare_and_write": false, 00:13:36.110 "abort": true, 00:13:36.110 "nvme_admin": false, 00:13:36.110 "nvme_io": false 00:13:36.110 }, 00:13:36.110 "memory_domains": [ 00:13:36.110 { 00:13:36.110 "dma_device_id": "system", 00:13:36.110 "dma_device_type": 1 00:13:36.110 }, 00:13:36.110 { 00:13:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.110 "dma_device_type": 2 00:13:36.110 } 00:13:36.110 ], 00:13:36.110 "driver_specific": { 00:13:36.110 "passthru": { 00:13:36.110 "name": "Passthru0", 00:13:36.110 "base_bdev_name": "Malloc0" 00:13:36.110 } 00:13:36.110 } 00:13:36.110 } 00:13:36.110 ]' 00:13:36.110 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:13:36.110 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:36.110 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:36.110 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.110 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.110 
11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.110 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:36.110 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.110 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.110 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.110 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:36.110 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.110 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.110 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.110 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:36.110 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:13:36.368 11:22:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:36.368 00:13:36.368 real 0m0.279s 00:13:36.368 user 0m0.169s 00:13:36.368 sys 0m0.045s 00:13:36.368 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:36.369 11:22:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 ************************************ 00:13:36.369 END TEST rpc_integrity 00:13:36.369 ************************************ 00:13:36.369 11:22:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:13:36.369 11:22:01 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:36.369 11:22:01 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:36.369 11:22:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 ************************************ 00:13:36.369 START TEST rpc_plugins 00:13:36.369 ************************************ 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:13:36.369 { 00:13:36.369 "name": "Malloc1", 00:13:36.369 "aliases": [ 00:13:36.369 "b45eb4bd-bf68-477c-9921-2aee104f38e5" 00:13:36.369 ], 00:13:36.369 "product_name": "Malloc disk", 00:13:36.369 "block_size": 4096, 00:13:36.369 "num_blocks": 256, 00:13:36.369 "uuid": "b45eb4bd-bf68-477c-9921-2aee104f38e5", 00:13:36.369 "assigned_rate_limits": { 00:13:36.369 "rw_ios_per_sec": 0, 00:13:36.369 "rw_mbytes_per_sec": 0, 00:13:36.369 "r_mbytes_per_sec": 0, 00:13:36.369 "w_mbytes_per_sec": 0 00:13:36.369 }, 00:13:36.369 "claimed": false, 00:13:36.369 "zoned": false, 00:13:36.369 "supported_io_types": { 00:13:36.369 "read": true, 00:13:36.369 "write": true, 00:13:36.369 "unmap": true, 00:13:36.369 "write_zeroes": true, 00:13:36.369 
"flush": true, 00:13:36.369 "reset": true, 00:13:36.369 "compare": false, 00:13:36.369 "compare_and_write": false, 00:13:36.369 "abort": true, 00:13:36.369 "nvme_admin": false, 00:13:36.369 "nvme_io": false 00:13:36.369 }, 00:13:36.369 "memory_domains": [ 00:13:36.369 { 00:13:36.369 "dma_device_id": "system", 00:13:36.369 "dma_device_type": 1 00:13:36.369 }, 00:13:36.369 { 00:13:36.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.369 "dma_device_type": 2 00:13:36.369 } 00:13:36.369 ], 00:13:36.369 "driver_specific": {} 00:13:36.369 } 00:13:36.369 ]' 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:13:36.369 11:22:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:13:36.369 00:13:36.369 real 0m0.157s 00:13:36.369 user 0m0.094s 00:13:36.369 sys 0m0.026s 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:36.369 11:22:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 ************************************ 00:13:36.369 END TEST rpc_plugins 00:13:36.369 ************************************ 00:13:36.628 11:22:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:13:36.628 11:22:01 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:36.628 11:22:01 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:36.628 11:22:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.628 ************************************ 00:13:36.628 START TEST rpc_trace_cmd_test 00:13:36.628 ************************************ 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:13:36.628 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3765270", 00:13:36.628 "tpoint_group_mask": "0x8", 00:13:36.628 "iscsi_conn": { 00:13:36.628 "mask": "0x2", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "scsi": { 00:13:36.628 "mask": "0x4", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "bdev": { 00:13:36.628 "mask": "0x8", 00:13:36.628 "tpoint_mask": 
"0xffffffffffffffff" 00:13:36.628 }, 00:13:36.628 "nvmf_rdma": { 00:13:36.628 "mask": "0x10", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "nvmf_tcp": { 00:13:36.628 "mask": "0x20", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "ftl": { 00:13:36.628 "mask": "0x40", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "blobfs": { 00:13:36.628 "mask": "0x80", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "dsa": { 00:13:36.628 "mask": "0x200", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "thread": { 00:13:36.628 "mask": "0x400", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "nvme_pcie": { 00:13:36.628 "mask": "0x800", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "iaa": { 00:13:36.628 "mask": "0x1000", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "nvme_tcp": { 00:13:36.628 "mask": "0x2000", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "bdev_nvme": { 00:13:36.628 "mask": "0x4000", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 }, 00:13:36.628 "sock": { 00:13:36.628 "mask": "0x8000", 00:13:36.628 "tpoint_mask": "0x0" 00:13:36.628 } 00:13:36.628 }' 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:13:36.628 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:13:36.887 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:13:36.887 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:13:36.887 11:22:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:13:36.887 00:13:36.887 real 0m0.245s 00:13:36.887 user 0m0.193s 00:13:36.887 sys 0m0.045s 00:13:36.887 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:36.887 11:22:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.887 ************************************ 00:13:36.887 END TEST rpc_trace_cmd_test 00:13:36.887 ************************************ 00:13:36.887 11:22:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:13:36.887 11:22:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:13:36.887 11:22:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:13:36.887 11:22:01 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:36.887 11:22:01 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:36.887 11:22:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.887 ************************************ 00:13:36.887 START TEST rpc_daemon_integrity 00:13:36.887 ************************************ 00:13:36.887 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:13:36.887 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:36.887 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.887 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.887 11:22:01 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.887 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:36.887 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:13:36.887 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:36.887 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:36.888 { 00:13:36.888 "name": "Malloc2", 00:13:36.888 "aliases": [ 00:13:36.888 "f04f726f-5ae9-42b5-a6fd-3123f97e323d" 00:13:36.888 ], 00:13:36.888 "product_name": "Malloc disk", 00:13:36.888 "block_size": 512, 00:13:36.888 "num_blocks": 16384, 00:13:36.888 "uuid": "f04f726f-5ae9-42b5-a6fd-3123f97e323d", 00:13:36.888 "assigned_rate_limits": { 00:13:36.888 "rw_ios_per_sec": 0, 00:13:36.888 "rw_mbytes_per_sec": 0, 00:13:36.888 "r_mbytes_per_sec": 0, 00:13:36.888 "w_mbytes_per_sec": 0 00:13:36.888 }, 00:13:36.888 "claimed": false, 00:13:36.888 "zoned": false, 00:13:36.888 "supported_io_types": { 00:13:36.888 "read": true, 00:13:36.888 "write": true, 00:13:36.888 "unmap": true, 00:13:36.888 "write_zeroes": true, 00:13:36.888 "flush": true, 00:13:36.888 "reset": true, 00:13:36.888 "compare": false, 00:13:36.888 "compare_and_write": false, 00:13:36.888 "abort": true, 00:13:36.888 "nvme_admin": false, 00:13:36.888 "nvme_io": false 00:13:36.888 }, 00:13:36.888 "memory_domains": [ 00:13:36.888 { 00:13:36.888 "dma_device_id": "system", 00:13:36.888 "dma_device_type": 1 00:13:36.888 }, 00:13:36.888 { 00:13:36.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.888 "dma_device_type": 2 00:13:36.888 } 00:13:36.888 ], 00:13:36.888 "driver_specific": {} 00:13:36.888 } 00:13:36.888 ]' 00:13:36.888 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:13:37.148 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:37.148 11:22:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:13:37.148 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.148 11:22:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:37.148 [2024-06-10 11:22:01.998221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:13:37.148 [2024-06-10 11:22:01.998256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.148 [2024-06-10 11:22:01.998273] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ee4590 00:13:37.148 [2024-06-10 11:22:01.998285] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.148 [2024-06-10 11:22:01.999515] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.148 [2024-06-10 11:22:01.999540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:37.148 Passthru0 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:37.148 { 00:13:37.148 "name": "Malloc2", 00:13:37.148 "aliases": [ 00:13:37.148 "f04f726f-5ae9-42b5-a6fd-3123f97e323d" 00:13:37.148 ], 00:13:37.148 "product_name": "Malloc disk", 00:13:37.148 "block_size": 512, 00:13:37.148 "num_blocks": 16384, 00:13:37.148 "uuid": "f04f726f-5ae9-42b5-a6fd-3123f97e323d", 00:13:37.148 "assigned_rate_limits": { 00:13:37.148 "rw_ios_per_sec": 0, 00:13:37.148 "rw_mbytes_per_sec": 0, 00:13:37.148 "r_mbytes_per_sec": 0, 00:13:37.148 "w_mbytes_per_sec": 0 00:13:37.148 }, 00:13:37.148 "claimed": true, 00:13:37.148 "claim_type": "exclusive_write", 00:13:37.148 "zoned": false, 00:13:37.148 "supported_io_types": { 00:13:37.148 "read": true, 00:13:37.148 "write": true, 00:13:37.148 "unmap": true, 00:13:37.148 "write_zeroes": true, 00:13:37.148 "flush": true, 00:13:37.148 "reset": true, 00:13:37.148 "compare": false, 00:13:37.148 "compare_and_write": false, 00:13:37.148 "abort": true, 00:13:37.148 "nvme_admin": false, 00:13:37.148 "nvme_io": false 00:13:37.148 }, 00:13:37.148 "memory_domains": [ 00:13:37.148 { 00:13:37.148 "dma_device_id": "system", 00:13:37.148 "dma_device_type": 1 00:13:37.148 }, 00:13:37.148 { 00:13:37.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.148 "dma_device_type": 2 00:13:37.148 } 00:13:37.148 ], 00:13:37.148 "driver_specific": {} 00:13:37.148 }, 00:13:37.148 { 00:13:37.148 "name": "Passthru0", 00:13:37.148 "aliases": [ 00:13:37.148 "2e7a5031-4e29-5119-9407-2a65b6c547b5" 00:13:37.148 ], 00:13:37.148 "product_name": "passthru", 00:13:37.148 "block_size": 512, 00:13:37.148 "num_blocks": 16384, 00:13:37.148 "uuid": "2e7a5031-4e29-5119-9407-2a65b6c547b5", 00:13:37.148 "assigned_rate_limits": { 00:13:37.148 "rw_ios_per_sec": 0, 00:13:37.148 "rw_mbytes_per_sec": 0, 00:13:37.148 "r_mbytes_per_sec": 0, 00:13:37.148 "w_mbytes_per_sec": 0 00:13:37.148 }, 00:13:37.148 "claimed": false, 00:13:37.148 "zoned": false, 00:13:37.148 "supported_io_types": { 00:13:37.148 "read": true, 00:13:37.148 "write": true, 00:13:37.148 "unmap": true, 00:13:37.148 "write_zeroes": true, 00:13:37.148 "flush": true, 00:13:37.148 "reset": true, 00:13:37.148 "compare": false, 00:13:37.148 "compare_and_write": false, 00:13:37.148 "abort": true, 00:13:37.148 "nvme_admin": false, 00:13:37.148 "nvme_io": false 00:13:37.148 }, 00:13:37.148 "memory_domains": [ 00:13:37.148 { 00:13:37.148 "dma_device_id": "system", 00:13:37.148 "dma_device_type": 1 00:13:37.148 }, 00:13:37.148 { 00:13:37.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.148 "dma_device_type": 2 00:13:37.148 } 00:13:37.148 ], 00:13:37.148 "driver_specific": { 00:13:37.148 "passthru": { 00:13:37.148 "name": "Passthru0", 00:13:37.148 "base_bdev_name": "Malloc2" 00:13:37.148 } 00:13:37.148 } 00:13:37.148 } 00:13:37.148 ]' 00:13:37.148 11:22:02 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:37.148 00:13:37.148 real 0m0.295s 00:13:37.148 user 0m0.187s 00:13:37.148 sys 0m0.048s 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:37.148 11:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:37.148 ************************************ 00:13:37.148 END TEST rpc_daemon_integrity 00:13:37.148 ************************************ 00:13:37.148 11:22:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:37.148 11:22:02 rpc -- rpc/rpc.sh@84 -- # killprocess 3765270 00:13:37.148 11:22:02 rpc -- common/autotest_common.sh@949 -- # '[' -z 3765270 ']' 00:13:37.148 11:22:02 rpc -- common/autotest_common.sh@953 -- # kill -0 3765270 00:13:37.148 11:22:02 rpc -- common/autotest_common.sh@954 -- # uname 00:13:37.148 11:22:02 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:37.148 11:22:02 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3765270 00:13:37.407 11:22:02 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:37.407 11:22:02 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:37.407 11:22:02 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3765270' 00:13:37.407 killing process with pid 3765270 00:13:37.407 11:22:02 rpc -- common/autotest_common.sh@968 -- # kill 3765270 00:13:37.407 11:22:02 rpc -- common/autotest_common.sh@973 -- # wait 3765270 00:13:37.666 00:13:37.666 real 0m2.737s 00:13:37.666 user 0m3.479s 00:13:37.666 sys 0m0.888s 00:13:37.666 11:22:02 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:37.666 11:22:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.666 ************************************ 00:13:37.666 END TEST rpc 00:13:37.666 ************************************ 00:13:37.666 11:22:02 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:13:37.666 11:22:02 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:37.666 11:22:02 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:37.666 11:22:02 -- common/autotest_common.sh@10 -- # set +x 00:13:37.666 ************************************ 00:13:37.666 START TEST skip_rpc 00:13:37.666 ************************************ 00:13:37.666 11:22:02 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:13:37.666 * Looking for test storage... 00:13:37.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:13:37.666 11:22:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:13:37.666 11:22:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:13:37.666 11:22:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:13:37.666 11:22:02 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:37.666 11:22:02 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:37.666 11:22:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.925 ************************************ 00:13:37.925 START TEST skip_rpc 00:13:37.925 ************************************ 00:13:37.925 11:22:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:13:37.925 11:22:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3765917 00:13:37.925 11:22:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:37.925 11:22:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:13:37.925 11:22:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:13:37.925 [2024-06-10 11:22:02.856929] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:13:37.925 [2024-06-10 11:22:02.856991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3765917 ] 00:13:37.925 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.925 [2024-06-10 11:22:02.978982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.185 [2024-06-10 11:22:03.061568] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3765917 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 3765917 ']' 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 3765917 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3765917 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3765917' 00:13:43.458 killing process with pid 3765917 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 3765917 00:13:43.458 11:22:07 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 3765917 00:13:43.458 00:13:43.458 real 0m5.398s 00:13:43.458 user 0m5.086s 00:13:43.458 sys 0m0.341s 00:13:43.458 11:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:43.458 11:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.458 ************************************ 00:13:43.458 END TEST skip_rpc 
00:13:43.458 ************************************ 00:13:43.458 11:22:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:13:43.458 11:22:08 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:43.458 11:22:08 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:43.458 11:22:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.458 ************************************ 00:13:43.458 START TEST skip_rpc_with_json 00:13:43.458 ************************************ 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3766992 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3766992 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 3766992 ']' 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:43.458 11:22:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:43.458 [2024-06-10 11:22:08.335316] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:13:43.458 [2024-06-10 11:22:08.335373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3766992 ] 00:13:43.458 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.458 [2024-06-10 11:22:08.454334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.458 [2024-06-10 11:22:08.538939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.394 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:44.394 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:13:44.394 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:13:44.394 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.394 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:44.394 [2024-06-10 11:22:09.235689] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:13:44.394 request: 00:13:44.394 { 00:13:44.394 "trtype": "tcp", 00:13:44.394 "method": "nvmf_get_transports", 00:13:44.394 "req_id": 1 00:13:44.394 } 00:13:44.394 Got JSON-RPC error response 00:13:44.395 response: 00:13:44.395 { 00:13:44.395 "code": -19, 00:13:44.395 "message": "No such device" 00:13:44.395 } 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:44.395 [2024-06-10 11:22:09.247811] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.395 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:13:44.395 { 00:13:44.395 "subsystems": [ 00:13:44.395 { 00:13:44.395 "subsystem": "vfio_user_target", 00:13:44.395 "config": null 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "subsystem": "keyring", 00:13:44.395 "config": [] 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "subsystem": "iobuf", 00:13:44.395 "config": [ 00:13:44.395 { 00:13:44.395 "method": "iobuf_set_options", 00:13:44.395 "params": { 00:13:44.395 "small_pool_count": 8192, 00:13:44.395 "large_pool_count": 1024, 00:13:44.395 "small_bufsize": 8192, 00:13:44.395 "large_bufsize": 135168 00:13:44.395 } 00:13:44.395 } 00:13:44.395 ] 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "subsystem": "sock", 00:13:44.395 "config": [ 00:13:44.395 { 00:13:44.395 "method": "sock_set_default_impl", 00:13:44.395 "params": { 00:13:44.395 "impl_name": "posix" 00:13:44.395 } 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "method": 
"sock_impl_set_options", 00:13:44.395 "params": { 00:13:44.395 "impl_name": "ssl", 00:13:44.395 "recv_buf_size": 4096, 00:13:44.395 "send_buf_size": 4096, 00:13:44.395 "enable_recv_pipe": true, 00:13:44.395 "enable_quickack": false, 00:13:44.395 "enable_placement_id": 0, 00:13:44.395 "enable_zerocopy_send_server": true, 00:13:44.395 "enable_zerocopy_send_client": false, 00:13:44.395 "zerocopy_threshold": 0, 00:13:44.395 "tls_version": 0, 00:13:44.395 "enable_ktls": false 00:13:44.395 } 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "method": "sock_impl_set_options", 00:13:44.395 "params": { 00:13:44.395 "impl_name": "posix", 00:13:44.395 "recv_buf_size": 2097152, 00:13:44.395 "send_buf_size": 2097152, 00:13:44.395 "enable_recv_pipe": true, 00:13:44.395 "enable_quickack": false, 00:13:44.395 "enable_placement_id": 0, 00:13:44.395 "enable_zerocopy_send_server": true, 00:13:44.395 "enable_zerocopy_send_client": false, 00:13:44.395 "zerocopy_threshold": 0, 00:13:44.395 "tls_version": 0, 00:13:44.395 "enable_ktls": false 00:13:44.395 } 00:13:44.395 } 00:13:44.395 ] 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "subsystem": "vmd", 00:13:44.395 "config": [] 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "subsystem": "accel", 00:13:44.395 "config": [ 00:13:44.395 { 00:13:44.395 "method": "accel_set_options", 00:13:44.395 "params": { 00:13:44.395 "small_cache_size": 128, 00:13:44.395 "large_cache_size": 16, 00:13:44.395 "task_count": 2048, 00:13:44.395 "sequence_count": 2048, 00:13:44.395 "buf_count": 2048 00:13:44.395 } 00:13:44.395 } 00:13:44.395 ] 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "subsystem": "bdev", 00:13:44.395 "config": [ 00:13:44.395 { 00:13:44.395 "method": "bdev_set_options", 00:13:44.395 "params": { 00:13:44.395 "bdev_io_pool_size": 65535, 00:13:44.395 "bdev_io_cache_size": 256, 00:13:44.395 "bdev_auto_examine": true, 00:13:44.395 "iobuf_small_cache_size": 128, 00:13:44.395 "iobuf_large_cache_size": 16 00:13:44.395 } 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "method": "bdev_raid_set_options", 00:13:44.395 "params": { 00:13:44.395 "process_window_size_kb": 1024 00:13:44.395 } 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "method": "bdev_iscsi_set_options", 00:13:44.395 "params": { 00:13:44.395 "timeout_sec": 30 00:13:44.395 } 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "method": "bdev_nvme_set_options", 00:13:44.395 "params": { 00:13:44.395 "action_on_timeout": "none", 00:13:44.395 "timeout_us": 0, 00:13:44.395 "timeout_admin_us": 0, 00:13:44.395 "keep_alive_timeout_ms": 10000, 00:13:44.395 "arbitration_burst": 0, 00:13:44.395 "low_priority_weight": 0, 00:13:44.395 "medium_priority_weight": 0, 00:13:44.395 "high_priority_weight": 0, 00:13:44.395 "nvme_adminq_poll_period_us": 10000, 00:13:44.395 "nvme_ioq_poll_period_us": 0, 00:13:44.395 "io_queue_requests": 0, 00:13:44.395 "delay_cmd_submit": true, 00:13:44.395 "transport_retry_count": 4, 00:13:44.395 "bdev_retry_count": 3, 00:13:44.395 "transport_ack_timeout": 0, 00:13:44.395 "ctrlr_loss_timeout_sec": 0, 00:13:44.395 "reconnect_delay_sec": 0, 00:13:44.395 "fast_io_fail_timeout_sec": 0, 00:13:44.395 "disable_auto_failback": false, 00:13:44.395 "generate_uuids": false, 00:13:44.395 "transport_tos": 0, 00:13:44.395 "nvme_error_stat": false, 00:13:44.395 "rdma_srq_size": 0, 00:13:44.395 "io_path_stat": false, 00:13:44.395 "allow_accel_sequence": false, 00:13:44.395 "rdma_max_cq_size": 0, 00:13:44.395 "rdma_cm_event_timeout_ms": 0, 00:13:44.395 "dhchap_digests": [ 00:13:44.395 "sha256", 00:13:44.395 "sha384", 00:13:44.395 "sha512" 
00:13:44.395 ], 00:13:44.395 "dhchap_dhgroups": [ 00:13:44.395 "null", 00:13:44.395 "ffdhe2048", 00:13:44.395 "ffdhe3072", 00:13:44.395 "ffdhe4096", 00:13:44.395 "ffdhe6144", 00:13:44.395 "ffdhe8192" 00:13:44.395 ] 00:13:44.395 } 00:13:44.395 }, 00:13:44.395 { 00:13:44.395 "method": "bdev_nvme_set_hotplug", 00:13:44.395 "params": { 00:13:44.395 "period_us": 100000, 00:13:44.395 "enable": false 00:13:44.395 } 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "method": "bdev_wait_for_examine" 00:13:44.396 } 00:13:44.396 ] 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "subsystem": "scsi", 00:13:44.396 "config": null 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "subsystem": "scheduler", 00:13:44.396 "config": [ 00:13:44.396 { 00:13:44.396 "method": "framework_set_scheduler", 00:13:44.396 "params": { 00:13:44.396 "name": "static" 00:13:44.396 } 00:13:44.396 } 00:13:44.396 ] 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "subsystem": "vhost_scsi", 00:13:44.396 "config": [] 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "subsystem": "vhost_blk", 00:13:44.396 "config": [] 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "subsystem": "ublk", 00:13:44.396 "config": [] 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "subsystem": "nbd", 00:13:44.396 "config": [] 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "subsystem": "nvmf", 00:13:44.396 "config": [ 00:13:44.396 { 00:13:44.396 "method": "nvmf_set_config", 00:13:44.396 "params": { 00:13:44.396 "discovery_filter": "match_any", 00:13:44.396 "admin_cmd_passthru": { 00:13:44.396 "identify_ctrlr": false 00:13:44.396 } 00:13:44.396 } 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "method": "nvmf_set_max_subsystems", 00:13:44.396 "params": { 00:13:44.396 "max_subsystems": 1024 00:13:44.396 } 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "method": "nvmf_set_crdt", 00:13:44.396 "params": { 00:13:44.396 "crdt1": 0, 00:13:44.396 "crdt2": 0, 00:13:44.396 "crdt3": 0 00:13:44.396 } 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "method": "nvmf_create_transport", 00:13:44.396 "params": { 00:13:44.396 "trtype": "TCP", 00:13:44.396 "max_queue_depth": 128, 00:13:44.396 "max_io_qpairs_per_ctrlr": 127, 00:13:44.396 "in_capsule_data_size": 4096, 00:13:44.396 "max_io_size": 131072, 00:13:44.396 "io_unit_size": 131072, 00:13:44.396 "max_aq_depth": 128, 00:13:44.396 "num_shared_buffers": 511, 00:13:44.396 "buf_cache_size": 4294967295, 00:13:44.396 "dif_insert_or_strip": false, 00:13:44.396 "zcopy": false, 00:13:44.396 "c2h_success": true, 00:13:44.396 "sock_priority": 0, 00:13:44.396 "abort_timeout_sec": 1, 00:13:44.396 "ack_timeout": 0, 00:13:44.396 "data_wr_pool_size": 0 00:13:44.396 } 00:13:44.396 } 00:13:44.396 ] 00:13:44.396 }, 00:13:44.396 { 00:13:44.396 "subsystem": "iscsi", 00:13:44.396 "config": [ 00:13:44.396 { 00:13:44.396 "method": "iscsi_set_options", 00:13:44.396 "params": { 00:13:44.396 "node_base": "iqn.2016-06.io.spdk", 00:13:44.396 "max_sessions": 128, 00:13:44.396 "max_connections_per_session": 2, 00:13:44.396 "max_queue_depth": 64, 00:13:44.396 "default_time2wait": 2, 00:13:44.396 "default_time2retain": 20, 00:13:44.396 "first_burst_length": 8192, 00:13:44.396 "immediate_data": true, 00:13:44.396 "allow_duplicated_isid": false, 00:13:44.396 "error_recovery_level": 0, 00:13:44.396 "nop_timeout": 60, 00:13:44.396 "nop_in_interval": 30, 00:13:44.396 "disable_chap": false, 00:13:44.396 "require_chap": false, 00:13:44.396 "mutual_chap": false, 00:13:44.396 "chap_group": 0, 00:13:44.396 "max_large_datain_per_connection": 64, 00:13:44.396 "max_r2t_per_connection": 4, 00:13:44.396 
"pdu_pool_size": 36864, 00:13:44.396 "immediate_data_pool_size": 16384, 00:13:44.396 "data_out_pool_size": 2048 00:13:44.396 } 00:13:44.396 } 00:13:44.396 ] 00:13:44.396 } 00:13:44.396 ] 00:13:44.396 } 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3766992 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3766992 ']' 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3766992 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3766992 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3766992' 00:13:44.396 killing process with pid 3766992 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3766992 00:13:44.396 11:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3766992 00:13:44.972 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3767273 00:13:44.972 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:13:44.972 11:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3767273 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3767273 ']' 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3767273 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3767273 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3767273' 00:13:50.246 killing process with pid 3767273 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3767273 00:13:50.246 11:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3767273 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:13:50.246 00:13:50.246 real 
0m6.935s 00:13:50.246 user 0m6.706s 00:13:50.246 sys 0m0.796s 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:50.246 ************************************ 00:13:50.246 END TEST skip_rpc_with_json 00:13:50.246 ************************************ 00:13:50.246 11:22:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:13:50.246 11:22:15 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:50.246 11:22:15 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:50.246 11:22:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.246 ************************************ 00:13:50.246 START TEST skip_rpc_with_delay 00:13:50.246 ************************************ 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:13:50.246 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:50.506 [2024-06-10 11:22:15.359920] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:13:50.506 [2024-06-10 11:22:15.360007] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:13:50.506 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:13:50.506 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:50.506 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:50.506 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:50.506 00:13:50.506 real 0m0.080s 00:13:50.506 user 0m0.049s 00:13:50.506 sys 0m0.030s 00:13:50.506 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:50.506 11:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:13:50.506 ************************************ 00:13:50.506 END TEST skip_rpc_with_delay 00:13:50.506 ************************************ 00:13:50.506 11:22:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:13:50.506 11:22:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:13:50.506 11:22:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:13:50.506 11:22:15 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:50.506 11:22:15 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:50.506 11:22:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.506 ************************************ 00:13:50.506 START TEST exit_on_failed_rpc_init 00:13:50.506 ************************************ 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3768229 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3768229 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 3768229 ']' 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:50.506 11:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:50.506 [2024-06-10 11:22:15.502110] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:13:50.506 [2024-06-10 11:22:15.502155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768229 ] 00:13:50.506 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.506 [2024-06-10 11:22:15.609616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.765 [2024-06-10 11:22:15.696656] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:13:51.333 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:13:51.592 [2024-06-10 11:22:16.484902] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:13:51.592 [2024-06-10 11:22:16.484968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768402 ] 00:13:51.592 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.592 [2024-06-10 11:22:16.594365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.592 [2024-06-10 11:22:16.675138] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.592 [2024-06-10 11:22:16.675227] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:13:51.592 [2024-06-10 11:22:16.675249] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:51.592 [2024-06-10 11:22:16.675266] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:51.851 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3768229 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 3768229 ']' 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 3768229 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3768229 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3768229' 00:13:51.852 killing process with pid 3768229 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 3768229 00:13:51.852 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 3768229 00:13:52.110 00:13:52.110 real 0m1.674s 00:13:52.110 user 0m1.939s 00:13:52.110 sys 0m0.554s 00:13:52.110 11:22:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:52.110 11:22:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:52.110 ************************************ 00:13:52.110 END TEST exit_on_failed_rpc_init 00:13:52.110 ************************************ 00:13:52.110 11:22:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:13:52.110 00:13:52.110 real 0m14.534s 00:13:52.110 user 0m13.933s 00:13:52.110 sys 0m2.054s 00:13:52.110 11:22:17 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:52.110 11:22:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.110 ************************************ 00:13:52.110 END TEST skip_rpc 00:13:52.110 ************************************ 00:13:52.370 11:22:17 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:13:52.370 11:22:17 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:52.370 11:22:17 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:52.370 11:22:17 -- 
common/autotest_common.sh@10 -- # set +x 00:13:52.370 ************************************ 00:13:52.370 START TEST rpc_client 00:13:52.370 ************************************ 00:13:52.370 11:22:17 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:13:52.370 * Looking for test storage... 00:13:52.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:13:52.370 11:22:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:13:52.370 OK 00:13:52.370 11:22:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:13:52.370 00:13:52.370 real 0m0.138s 00:13:52.370 user 0m0.055s 00:13:52.370 sys 0m0.094s 00:13:52.370 11:22:17 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:52.370 11:22:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:13:52.370 ************************************ 00:13:52.370 END TEST rpc_client 00:13:52.370 ************************************ 00:13:52.370 11:22:17 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:13:52.370 11:22:17 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:13:52.370 11:22:17 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:52.370 11:22:17 -- common/autotest_common.sh@10 -- # set +x 00:13:52.630 ************************************ 00:13:52.630 START TEST json_config 00:13:52.630 ************************************ 00:13:52.630 11:22:17 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.630 11:22:17 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.630 11:22:17 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.630 11:22:17 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.630 11:22:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.630 11:22:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.630 11:22:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.630 11:22:17 json_config -- paths/export.sh@5 -- # export PATH 00:13:52.630 11:22:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@47 -- # : 0 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:52.630 11:22:17 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:13:52.630 11:22:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:13:52.631 11:22:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:13:52.631 11:22:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:13:52.631 11:22:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:13:52.631 11:22:17 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:52.631 11:22:17 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:13:52.631 INFO: JSON configuration test init 00:13:52.631 11:22:17 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:13:52.631 11:22:17 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.631 11:22:17 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.631 11:22:17 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:13:52.631 11:22:17 json_config -- json_config/common.sh@9 -- # local app=target 00:13:52.631 11:22:17 json_config -- json_config/common.sh@10 -- # shift 00:13:52.631 11:22:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:52.631 11:22:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:52.631 11:22:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:13:52.631 11:22:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:52.631 11:22:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:52.631 11:22:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3768772 00:13:52.631 11:22:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:52.631 Waiting for target to run... 
00:13:52.631 11:22:17 json_config -- json_config/common.sh@25 -- # waitforlisten 3768772 /var/tmp/spdk_tgt.sock 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@830 -- # '[' -z 3768772 ']' 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:52.631 11:22:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:52.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:52.631 11:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:52.631 [2024-06-10 11:22:17.669328] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:13:52.631 [2024-06-10 11:22:17.669390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768772 ] 00:13:52.631 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.200 [2024-06-10 11:22:18.156827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.200 [2024-06-10 11:22:18.249833] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.768 11:22:18 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:53.768 11:22:18 json_config -- common/autotest_common.sh@863 -- # return 0 00:13:53.768 11:22:18 json_config -- json_config/common.sh@26 -- # echo '' 00:13:53.768 00:13:53.768 11:22:18 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:13:53.768 11:22:18 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:13:53.768 11:22:18 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:53.768 11:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:53.768 11:22:18 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:13:53.768 11:22:18 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:13:53.768 11:22:18 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:53.768 11:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:53.768 11:22:18 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:13:53.768 11:22:18 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:13:53.768 11:22:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:13:57.059 11:22:21 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:13:57.059 11:22:21 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:13:57.059 11:22:21 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:57.059 11:22:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.059 11:22:21 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:13:57.059 11:22:21 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:13:57.059 11:22:21 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:13:57.059 11:22:21 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:13:57.059 11:22:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:13:57.059 11:22:21 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@48 -- # local get_types 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:13:57.059 11:22:22 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:57.059 11:22:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@55 -- # return 0 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:13:57.059 11:22:22 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:57.059 11:22:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:13:57.059 11:22:22 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:57.059 11:22:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:13:57.316 MallocForNvmf0 00:13:57.316 11:22:22 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:57.316 11:22:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:13:57.574 MallocForNvmf1 00:13:57.574 11:22:22 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:13:57.574 11:22:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:13:57.832 [2024-06-10 11:22:22.725402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.832 11:22:22 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:57.832 11:22:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:58.091 11:22:22 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:58.091 11:22:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:13:58.350 11:22:23 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:58.350 11:22:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:13:58.350 11:22:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:58.350 11:22:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:13:58.609 [2024-06-10 11:22:23.640380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:13:58.609 11:22:23 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:13:58.609 11:22:23 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:58.609 11:22:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:58.609 11:22:23 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:13:58.609 11:22:23 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:58.609 11:22:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:58.868 11:22:23 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:13:58.868 11:22:23 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:13:58.868 11:22:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:13:58.868 MallocBdevForConfigChangeCheck 00:13:59.126 11:22:23 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:13:59.126 11:22:23 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:59.126 11:22:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:59.126 11:22:24 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:13:59.126 11:22:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:13:59.385 11:22:24 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:13:59.385 INFO: shutting down applications... 
00:13:59.385 11:22:24 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:13:59.385 11:22:24 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:13:59.385 11:22:24 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:13:59.385 11:22:24 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:14:01.919 Calling clear_iscsi_subsystem 00:14:01.919 Calling clear_nvmf_subsystem 00:14:01.919 Calling clear_nbd_subsystem 00:14:01.919 Calling clear_ublk_subsystem 00:14:01.919 Calling clear_vhost_blk_subsystem 00:14:01.919 Calling clear_vhost_scsi_subsystem 00:14:01.919 Calling clear_bdev_subsystem 00:14:01.919 11:22:26 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:14:01.919 11:22:26 json_config -- json_config/json_config.sh@343 -- # count=100 00:14:01.919 11:22:26 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:14:01.919 11:22:26 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:01.919 11:22:26 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:14:01.919 11:22:26 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:14:01.919 11:22:26 json_config -- json_config/json_config.sh@345 -- # break 00:14:01.919 11:22:26 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:14:01.919 11:22:26 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:14:01.919 11:22:26 json_config -- json_config/common.sh@31 -- # local app=target 00:14:01.919 11:22:26 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:01.919 11:22:26 json_config -- json_config/common.sh@35 -- # [[ -n 3768772 ]] 00:14:01.919 11:22:26 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3768772 00:14:01.919 11:22:26 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:01.919 11:22:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:01.919 11:22:26 json_config -- json_config/common.sh@41 -- # kill -0 3768772 00:14:01.919 11:22:26 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:14:02.178 11:22:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:14:02.178 11:22:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:02.178 11:22:27 json_config -- json_config/common.sh@41 -- # kill -0 3768772 00:14:02.178 11:22:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:02.178 11:22:27 json_config -- json_config/common.sh@43 -- # break 00:14:02.178 11:22:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:14:02.178 11:22:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:02.178 SPDK target shutdown done 00:14:02.178 11:22:27 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:14:02.178 INFO: relaunching applications... 
00:14:02.178 11:22:27 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:02.178 11:22:27 json_config -- json_config/common.sh@9 -- # local app=target 00:14:02.178 11:22:27 json_config -- json_config/common.sh@10 -- # shift 00:14:02.178 11:22:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:02.178 11:22:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:02.178 11:22:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:14:02.178 11:22:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:02.178 11:22:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:02.178 11:22:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3770501 00:14:02.178 11:22:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:02.178 Waiting for target to run... 00:14:02.178 11:22:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:02.178 11:22:27 json_config -- json_config/common.sh@25 -- # waitforlisten 3770501 /var/tmp/spdk_tgt.sock 00:14:02.178 11:22:27 json_config -- common/autotest_common.sh@830 -- # '[' -z 3770501 ']' 00:14:02.178 11:22:27 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:02.178 11:22:27 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:02.178 11:22:27 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:02.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:02.178 11:22:27 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:02.178 11:22:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:02.436 [2024-06-10 11:22:27.340628] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:02.436 [2024-06-10 11:22:27.340702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770501 ] 00:14:02.436 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.001 [2024-06-10 11:22:27.828651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.001 [2024-06-10 11:22:27.921280] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.511 [2024-06-10 11:22:30.986753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.511 [2024-06-10 11:22:31.019145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:06.770 11:22:31 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:06.770 11:22:31 json_config -- common/autotest_common.sh@863 -- # return 0 00:14:06.770 11:22:31 json_config -- json_config/common.sh@26 -- # echo '' 00:14:06.770 00:14:06.770 11:22:31 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:14:06.770 11:22:31 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:14:06.770 INFO: Checking if target configuration is the same... 
00:14:06.770 11:22:31 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:06.770 11:22:31 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:14:06.770 11:22:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:06.770 + '[' 2 -ne 2 ']' 00:14:06.770 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:14:06.770 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:14:06.770 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:06.770 +++ basename /dev/fd/62 00:14:06.770 ++ mktemp /tmp/62.XXX 00:14:06.770 + tmp_file_1=/tmp/62.aa4 00:14:06.770 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:06.770 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:06.770 + tmp_file_2=/tmp/spdk_tgt_config.json.B7h 00:14:06.770 + ret=0 00:14:06.770 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:14:07.030 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:14:07.030 + diff -u /tmp/62.aa4 /tmp/spdk_tgt_config.json.B7h 00:14:07.030 + echo 'INFO: JSON config files are the same' 00:14:07.030 INFO: JSON config files are the same 00:14:07.030 + rm /tmp/62.aa4 /tmp/spdk_tgt_config.json.B7h 00:14:07.030 + exit 0 00:14:07.030 11:22:32 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:14:07.030 11:22:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:14:07.030 INFO: changing configuration and checking if this can be detected... 00:14:07.030 11:22:32 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:07.030 11:22:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:14:07.288 11:22:32 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:07.288 11:22:32 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:14:07.288 11:22:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:14:07.288 + '[' 2 -ne 2 ']' 00:14:07.288 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:14:07.288 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:14:07.288 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:07.288 +++ basename /dev/fd/62 00:14:07.288 ++ mktemp /tmp/62.XXX 00:14:07.288 + tmp_file_1=/tmp/62.6wW 00:14:07.288 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:07.288 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:14:07.288 + tmp_file_2=/tmp/spdk_tgt_config.json.MNL 00:14:07.288 + ret=0 00:14:07.288 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:14:07.856 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:14:07.856 + diff -u /tmp/62.6wW /tmp/spdk_tgt_config.json.MNL 00:14:07.856 + ret=1 00:14:07.856 + echo '=== Start of file: /tmp/62.6wW ===' 00:14:07.856 + cat /tmp/62.6wW 00:14:07.856 + echo '=== End of file: /tmp/62.6wW ===' 00:14:07.856 + echo '' 00:14:07.856 + echo '=== Start of file: /tmp/spdk_tgt_config.json.MNL ===' 00:14:07.856 + cat /tmp/spdk_tgt_config.json.MNL 00:14:07.856 + echo '=== End of file: /tmp/spdk_tgt_config.json.MNL ===' 00:14:07.856 + echo '' 00:14:07.856 + rm /tmp/62.6wW /tmp/spdk_tgt_config.json.MNL 00:14:07.856 + exit 1 00:14:07.856 11:22:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:14:07.856 INFO: configuration change detected. 00:14:07.856 11:22:32 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:14:07.856 11:22:32 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:14:07.856 11:22:32 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:07.856 11:22:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:07.856 11:22:32 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@317 -- # [[ -n 3770501 ]] 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@193 -- # uname -s 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:07.857 11:22:32 json_config -- json_config/json_config.sh@323 -- # killprocess 3770501 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@949 -- # '[' -z 3770501 ']' 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@953 -- # kill -0 3770501 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@954 -- # uname 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:07.857 11:22:32 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3770501 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3770501' 00:14:07.857 killing process with pid 3770501 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@968 -- # kill 3770501 00:14:07.857 11:22:32 json_config -- common/autotest_common.sh@973 -- # wait 3770501 00:14:10.392 11:22:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:14:10.392 11:22:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:14:10.392 11:22:35 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:10.392 11:22:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:10.392 11:22:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:14:10.392 11:22:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:14:10.392 INFO: Success 00:14:10.392 00:14:10.392 real 0m17.572s 00:14:10.392 user 0m18.815s 00:14:10.392 sys 0m2.687s 00:14:10.392 11:22:35 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:10.392 11:22:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:14:10.392 ************************************ 00:14:10.392 END TEST json_config 00:14:10.392 ************************************ 00:14:10.392 11:22:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:14:10.392 11:22:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:10.392 11:22:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:10.392 11:22:35 -- common/autotest_common.sh@10 -- # set +x 00:14:10.392 ************************************ 00:14:10.392 START TEST json_config_extra_key 00:14:10.392 ************************************ 00:14:10.392 11:22:35 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:14:10.392 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.392 11:22:35 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.392 11:22:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.393 11:22:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.393 11:22:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.393 11:22:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.393 11:22:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.393 11:22:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.393 11:22:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.393 11:22:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:14:10.393 11:22:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.393 11:22:35 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.393 11:22:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:14:10.393 INFO: launching applications... 00:14:10.393 11:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3771958 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:10.393 Waiting for target to run... 
00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3771958 /var/tmp/spdk_tgt.sock 00:14:10.393 11:22:35 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 3771958 ']' 00:14:10.393 11:22:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:10.393 11:22:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:14:10.393 11:22:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:10.393 11:22:35 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:10.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:10.393 11:22:35 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:10.393 11:22:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:10.393 [2024-06-10 11:22:35.297892] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:10.393 [2024-06-10 11:22:35.297957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3771958 ] 00:14:10.393 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.653 [2024-06-10 11:22:35.652865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.653 [2024-06-10 11:22:35.726722] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.223 11:22:36 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:11.223 11:22:36 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:14:11.223 11:22:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:14:11.223 00:14:11.223 11:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:14:11.223 INFO: shutting down applications... 
00:14:11.223 11:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:14:11.223 11:22:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:14:11.223 11:22:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:14:11.223 11:22:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3771958 ]] 00:14:11.223 11:22:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3771958 00:14:11.223 11:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:14:11.223 11:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:11.223 11:22:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3771958 00:14:11.223 11:22:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:14:11.804 11:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:14:11.804 11:22:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:14:11.804 11:22:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3771958 00:14:11.804 11:22:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:14:11.804 11:22:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:14:11.804 11:22:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:14:11.804 11:22:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:14:11.804 SPDK target shutdown done 00:14:11.804 11:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:14:11.804 Success 00:14:11.804 00:14:11.804 real 0m1.510s 00:14:11.804 user 0m1.238s 00:14:11.804 sys 0m0.486s 00:14:11.804 11:22:36 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:11.804 11:22:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:14:11.804 ************************************ 00:14:11.804 END TEST json_config_extra_key 00:14:11.804 ************************************ 00:14:11.804 11:22:36 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:11.804 11:22:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:11.804 11:22:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:11.804 11:22:36 -- common/autotest_common.sh@10 -- # set +x 00:14:11.804 ************************************ 00:14:11.804 START TEST alias_rpc 00:14:11.804 ************************************ 00:14:11.804 11:22:36 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:14:11.804 * Looking for test storage... 
00:14:11.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:14:11.804 11:22:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:11.804 11:22:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3772267 00:14:11.804 11:22:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3772267 00:14:11.804 11:22:36 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 3772267 ']' 00:14:11.804 11:22:36 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.804 11:22:36 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:11.804 11:22:36 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.805 11:22:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:11.805 11:22:36 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:11.805 11:22:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.805 [2024-06-10 11:22:36.882411] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:11.805 [2024-06-10 11:22:36.882474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772267 ] 00:14:12.063 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.063 [2024-06-10 11:22:37.005723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.063 [2024-06-10 11:22:37.093445] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.012 11:22:37 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:13.012 11:22:37 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:14:13.012 11:22:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:14:13.012 11:22:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3772267 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 3772267 ']' 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 3772267 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3772267 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3772267' 00:14:13.012 killing process with pid 3772267 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@968 -- # kill 3772267 00:14:13.012 11:22:38 alias_rpc -- common/autotest_common.sh@973 -- # wait 3772267 00:14:13.581 00:14:13.581 real 0m1.695s 00:14:13.581 user 0m1.852s 00:14:13.581 sys 0m0.540s 00:14:13.581 11:22:38 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:13.581 11:22:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.581 
************************************ 00:14:13.581 END TEST alias_rpc 00:14:13.581 ************************************ 00:14:13.581 11:22:38 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:14:13.581 11:22:38 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:14:13.581 11:22:38 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:13.581 11:22:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:13.581 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:14:13.581 ************************************ 00:14:13.581 START TEST spdkcli_tcp 00:14:13.581 ************************************ 00:14:13.581 11:22:38 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:14:13.581 * Looking for test storage... 00:14:13.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:14:13.581 11:22:38 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:13.581 11:22:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3772656 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3772656 00:14:13.581 11:22:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:14:13.581 11:22:38 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 3772656 ']' 00:14:13.581 11:22:38 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.581 11:22:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:13.581 11:22:38 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.581 11:22:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:13.582 11:22:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.582 [2024-06-10 11:22:38.662066] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:14:13.582 [2024-06-10 11:22:38.662139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772656 ] 00:14:13.841 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.841 [2024-06-10 11:22:38.783424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:13.841 [2024-06-10 11:22:38.869524] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.841 [2024-06-10 11:22:38.869529] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.780 11:22:39 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:14.780 11:22:39 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:14:14.780 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3772857 00:14:14.780 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:14:14.780 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:14:14.780 [ 00:14:14.780 "bdev_malloc_delete", 00:14:14.780 "bdev_malloc_create", 00:14:14.780 "bdev_null_resize", 00:14:14.780 "bdev_null_delete", 00:14:14.780 "bdev_null_create", 00:14:14.780 "bdev_nvme_cuse_unregister", 00:14:14.780 "bdev_nvme_cuse_register", 00:14:14.780 "bdev_opal_new_user", 00:14:14.780 "bdev_opal_set_lock_state", 00:14:14.780 "bdev_opal_delete", 00:14:14.780 "bdev_opal_get_info", 00:14:14.780 "bdev_opal_create", 00:14:14.780 "bdev_nvme_opal_revert", 00:14:14.780 "bdev_nvme_opal_init", 00:14:14.780 "bdev_nvme_send_cmd", 00:14:14.780 "bdev_nvme_get_path_iostat", 00:14:14.780 "bdev_nvme_get_mdns_discovery_info", 00:14:14.780 "bdev_nvme_stop_mdns_discovery", 00:14:14.780 "bdev_nvme_start_mdns_discovery", 00:14:14.780 "bdev_nvme_set_multipath_policy", 00:14:14.780 "bdev_nvme_set_preferred_path", 00:14:14.780 "bdev_nvme_get_io_paths", 00:14:14.780 "bdev_nvme_remove_error_injection", 00:14:14.780 "bdev_nvme_add_error_injection", 00:14:14.780 "bdev_nvme_get_discovery_info", 00:14:14.780 "bdev_nvme_stop_discovery", 00:14:14.780 "bdev_nvme_start_discovery", 00:14:14.780 "bdev_nvme_get_controller_health_info", 00:14:14.780 "bdev_nvme_disable_controller", 00:14:14.780 "bdev_nvme_enable_controller", 00:14:14.780 "bdev_nvme_reset_controller", 00:14:14.780 "bdev_nvme_get_transport_statistics", 00:14:14.780 "bdev_nvme_apply_firmware", 00:14:14.780 "bdev_nvme_detach_controller", 00:14:14.780 "bdev_nvme_get_controllers", 00:14:14.780 "bdev_nvme_attach_controller", 00:14:14.780 "bdev_nvme_set_hotplug", 00:14:14.780 "bdev_nvme_set_options", 00:14:14.780 "bdev_passthru_delete", 00:14:14.780 "bdev_passthru_create", 00:14:14.780 "bdev_lvol_set_parent_bdev", 00:14:14.780 "bdev_lvol_set_parent", 00:14:14.780 "bdev_lvol_check_shallow_copy", 00:14:14.780 "bdev_lvol_start_shallow_copy", 00:14:14.780 "bdev_lvol_grow_lvstore", 00:14:14.780 "bdev_lvol_get_lvols", 00:14:14.780 "bdev_lvol_get_lvstores", 00:14:14.780 "bdev_lvol_delete", 00:14:14.780 "bdev_lvol_set_read_only", 00:14:14.780 "bdev_lvol_resize", 00:14:14.780 "bdev_lvol_decouple_parent", 00:14:14.780 "bdev_lvol_inflate", 00:14:14.780 "bdev_lvol_rename", 00:14:14.780 "bdev_lvol_clone_bdev", 00:14:14.780 "bdev_lvol_clone", 00:14:14.780 "bdev_lvol_snapshot", 00:14:14.780 "bdev_lvol_create", 00:14:14.780 "bdev_lvol_delete_lvstore", 00:14:14.780 "bdev_lvol_rename_lvstore", 
00:14:14.780 "bdev_lvol_create_lvstore", 00:14:14.780 "bdev_raid_set_options", 00:14:14.780 "bdev_raid_remove_base_bdev", 00:14:14.780 "bdev_raid_add_base_bdev", 00:14:14.780 "bdev_raid_delete", 00:14:14.780 "bdev_raid_create", 00:14:14.780 "bdev_raid_get_bdevs", 00:14:14.780 "bdev_error_inject_error", 00:14:14.780 "bdev_error_delete", 00:14:14.780 "bdev_error_create", 00:14:14.780 "bdev_split_delete", 00:14:14.780 "bdev_split_create", 00:14:14.780 "bdev_delay_delete", 00:14:14.780 "bdev_delay_create", 00:14:14.780 "bdev_delay_update_latency", 00:14:14.780 "bdev_zone_block_delete", 00:14:14.780 "bdev_zone_block_create", 00:14:14.780 "blobfs_create", 00:14:14.780 "blobfs_detect", 00:14:14.780 "blobfs_set_cache_size", 00:14:14.780 "bdev_aio_delete", 00:14:14.780 "bdev_aio_rescan", 00:14:14.780 "bdev_aio_create", 00:14:14.780 "bdev_ftl_set_property", 00:14:14.780 "bdev_ftl_get_properties", 00:14:14.780 "bdev_ftl_get_stats", 00:14:14.780 "bdev_ftl_unmap", 00:14:14.780 "bdev_ftl_unload", 00:14:14.780 "bdev_ftl_delete", 00:14:14.780 "bdev_ftl_load", 00:14:14.780 "bdev_ftl_create", 00:14:14.780 "bdev_virtio_attach_controller", 00:14:14.780 "bdev_virtio_scsi_get_devices", 00:14:14.780 "bdev_virtio_detach_controller", 00:14:14.780 "bdev_virtio_blk_set_hotplug", 00:14:14.780 "bdev_iscsi_delete", 00:14:14.780 "bdev_iscsi_create", 00:14:14.780 "bdev_iscsi_set_options", 00:14:14.780 "accel_error_inject_error", 00:14:14.780 "ioat_scan_accel_module", 00:14:14.780 "dsa_scan_accel_module", 00:14:14.780 "iaa_scan_accel_module", 00:14:14.781 "vfu_virtio_create_scsi_endpoint", 00:14:14.781 "vfu_virtio_scsi_remove_target", 00:14:14.781 "vfu_virtio_scsi_add_target", 00:14:14.781 "vfu_virtio_create_blk_endpoint", 00:14:14.781 "vfu_virtio_delete_endpoint", 00:14:14.781 "keyring_file_remove_key", 00:14:14.781 "keyring_file_add_key", 00:14:14.781 "keyring_linux_set_options", 00:14:14.781 "iscsi_get_histogram", 00:14:14.781 "iscsi_enable_histogram", 00:14:14.781 "iscsi_set_options", 00:14:14.781 "iscsi_get_auth_groups", 00:14:14.781 "iscsi_auth_group_remove_secret", 00:14:14.781 "iscsi_auth_group_add_secret", 00:14:14.781 "iscsi_delete_auth_group", 00:14:14.781 "iscsi_create_auth_group", 00:14:14.781 "iscsi_set_discovery_auth", 00:14:14.781 "iscsi_get_options", 00:14:14.781 "iscsi_target_node_request_logout", 00:14:14.781 "iscsi_target_node_set_redirect", 00:14:14.781 "iscsi_target_node_set_auth", 00:14:14.781 "iscsi_target_node_add_lun", 00:14:14.781 "iscsi_get_stats", 00:14:14.781 "iscsi_get_connections", 00:14:14.781 "iscsi_portal_group_set_auth", 00:14:14.781 "iscsi_start_portal_group", 00:14:14.781 "iscsi_delete_portal_group", 00:14:14.781 "iscsi_create_portal_group", 00:14:14.781 "iscsi_get_portal_groups", 00:14:14.781 "iscsi_delete_target_node", 00:14:14.781 "iscsi_target_node_remove_pg_ig_maps", 00:14:14.781 "iscsi_target_node_add_pg_ig_maps", 00:14:14.781 "iscsi_create_target_node", 00:14:14.781 "iscsi_get_target_nodes", 00:14:14.781 "iscsi_delete_initiator_group", 00:14:14.781 "iscsi_initiator_group_remove_initiators", 00:14:14.781 "iscsi_initiator_group_add_initiators", 00:14:14.781 "iscsi_create_initiator_group", 00:14:14.781 "iscsi_get_initiator_groups", 00:14:14.781 "nvmf_set_crdt", 00:14:14.781 "nvmf_set_config", 00:14:14.781 "nvmf_set_max_subsystems", 00:14:14.781 "nvmf_stop_mdns_prr", 00:14:14.781 "nvmf_publish_mdns_prr", 00:14:14.781 "nvmf_subsystem_get_listeners", 00:14:14.781 "nvmf_subsystem_get_qpairs", 00:14:14.781 "nvmf_subsystem_get_controllers", 00:14:14.781 "nvmf_get_stats", 00:14:14.781 
"nvmf_get_transports", 00:14:14.781 "nvmf_create_transport", 00:14:14.781 "nvmf_get_targets", 00:14:14.781 "nvmf_delete_target", 00:14:14.781 "nvmf_create_target", 00:14:14.781 "nvmf_subsystem_allow_any_host", 00:14:14.781 "nvmf_subsystem_remove_host", 00:14:14.781 "nvmf_subsystem_add_host", 00:14:14.781 "nvmf_ns_remove_host", 00:14:14.781 "nvmf_ns_add_host", 00:14:14.781 "nvmf_subsystem_remove_ns", 00:14:14.781 "nvmf_subsystem_add_ns", 00:14:14.781 "nvmf_subsystem_listener_set_ana_state", 00:14:14.781 "nvmf_discovery_get_referrals", 00:14:14.781 "nvmf_discovery_remove_referral", 00:14:14.781 "nvmf_discovery_add_referral", 00:14:14.781 "nvmf_subsystem_remove_listener", 00:14:14.781 "nvmf_subsystem_add_listener", 00:14:14.781 "nvmf_delete_subsystem", 00:14:14.781 "nvmf_create_subsystem", 00:14:14.781 "nvmf_get_subsystems", 00:14:14.781 "env_dpdk_get_mem_stats", 00:14:14.781 "nbd_get_disks", 00:14:14.781 "nbd_stop_disk", 00:14:14.781 "nbd_start_disk", 00:14:14.781 "ublk_recover_disk", 00:14:14.781 "ublk_get_disks", 00:14:14.781 "ublk_stop_disk", 00:14:14.781 "ublk_start_disk", 00:14:14.781 "ublk_destroy_target", 00:14:14.781 "ublk_create_target", 00:14:14.781 "virtio_blk_create_transport", 00:14:14.781 "virtio_blk_get_transports", 00:14:14.781 "vhost_controller_set_coalescing", 00:14:14.781 "vhost_get_controllers", 00:14:14.781 "vhost_delete_controller", 00:14:14.781 "vhost_create_blk_controller", 00:14:14.781 "vhost_scsi_controller_remove_target", 00:14:14.781 "vhost_scsi_controller_add_target", 00:14:14.781 "vhost_start_scsi_controller", 00:14:14.781 "vhost_create_scsi_controller", 00:14:14.781 "thread_set_cpumask", 00:14:14.781 "framework_get_scheduler", 00:14:14.781 "framework_set_scheduler", 00:14:14.781 "framework_get_reactors", 00:14:14.781 "thread_get_io_channels", 00:14:14.781 "thread_get_pollers", 00:14:14.781 "thread_get_stats", 00:14:14.781 "framework_monitor_context_switch", 00:14:14.781 "spdk_kill_instance", 00:14:14.781 "log_enable_timestamps", 00:14:14.781 "log_get_flags", 00:14:14.781 "log_clear_flag", 00:14:14.781 "log_set_flag", 00:14:14.781 "log_get_level", 00:14:14.781 "log_set_level", 00:14:14.781 "log_get_print_level", 00:14:14.781 "log_set_print_level", 00:14:14.781 "framework_enable_cpumask_locks", 00:14:14.781 "framework_disable_cpumask_locks", 00:14:14.781 "framework_wait_init", 00:14:14.781 "framework_start_init", 00:14:14.781 "scsi_get_devices", 00:14:14.781 "bdev_get_histogram", 00:14:14.781 "bdev_enable_histogram", 00:14:14.781 "bdev_set_qos_limit", 00:14:14.781 "bdev_set_qd_sampling_period", 00:14:14.781 "bdev_get_bdevs", 00:14:14.781 "bdev_reset_iostat", 00:14:14.781 "bdev_get_iostat", 00:14:14.781 "bdev_examine", 00:14:14.781 "bdev_wait_for_examine", 00:14:14.781 "bdev_set_options", 00:14:14.781 "notify_get_notifications", 00:14:14.781 "notify_get_types", 00:14:14.781 "accel_get_stats", 00:14:14.781 "accel_set_options", 00:14:14.781 "accel_set_driver", 00:14:14.781 "accel_crypto_key_destroy", 00:14:14.781 "accel_crypto_keys_get", 00:14:14.781 "accel_crypto_key_create", 00:14:14.781 "accel_assign_opc", 00:14:14.781 "accel_get_module_info", 00:14:14.781 "accel_get_opc_assignments", 00:14:14.781 "vmd_rescan", 00:14:14.782 "vmd_remove_device", 00:14:14.782 "vmd_enable", 00:14:14.782 "sock_get_default_impl", 00:14:14.782 "sock_set_default_impl", 00:14:14.782 "sock_impl_set_options", 00:14:14.782 "sock_impl_get_options", 00:14:14.782 "iobuf_get_stats", 00:14:14.782 "iobuf_set_options", 00:14:14.782 "keyring_get_keys", 00:14:14.782 "framework_get_pci_devices", 
00:14:14.782 "framework_get_config", 00:14:14.782 "framework_get_subsystems", 00:14:14.782 "vfu_tgt_set_base_path", 00:14:14.782 "trace_get_info", 00:14:14.782 "trace_get_tpoint_group_mask", 00:14:14.782 "trace_disable_tpoint_group", 00:14:14.782 "trace_enable_tpoint_group", 00:14:14.782 "trace_clear_tpoint_mask", 00:14:14.782 "trace_set_tpoint_mask", 00:14:14.782 "spdk_get_version", 00:14:14.782 "rpc_get_methods" 00:14:14.782 ] 00:14:14.782 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:14:14.782 11:22:39 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:14.782 11:22:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:14.782 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:14.782 11:22:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3772656 00:14:14.782 11:22:39 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 3772656 ']' 00:14:14.782 11:22:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 3772656 00:14:14.782 11:22:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:14:14.782 11:22:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:14.782 11:22:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3772656 00:14:15.041 11:22:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:15.041 11:22:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:15.041 11:22:39 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3772656' 00:14:15.041 killing process with pid 3772656 00:14:15.041 11:22:39 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 3772656 00:14:15.041 11:22:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 3772656 00:14:15.300 00:14:15.300 real 0m1.742s 00:14:15.300 user 0m3.202s 00:14:15.300 sys 0m0.572s 00:14:15.300 11:22:40 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:15.300 11:22:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:15.300 ************************************ 00:14:15.300 END TEST spdkcli_tcp 00:14:15.300 ************************************ 00:14:15.300 11:22:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:15.300 11:22:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:15.300 11:22:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:15.300 11:22:40 -- common/autotest_common.sh@10 -- # set +x 00:14:15.300 ************************************ 00:14:15.300 START TEST dpdk_mem_utility 00:14:15.300 ************************************ 00:14:15.300 11:22:40 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:14:15.300 * Looking for test storage... 
00:14:15.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:14:15.559 11:22:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:14:15.559 11:22:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3773110 00:14:15.559 11:22:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3773110 00:14:15.559 11:22:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:14:15.559 11:22:40 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 3773110 ']' 00:14:15.559 11:22:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.559 11:22:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:15.559 11:22:40 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.559 11:22:40 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:15.559 11:22:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:15.559 [2024-06-10 11:22:40.466408] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:15.559 [2024-06-10 11:22:40.466478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773110 ] 00:14:15.559 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.559 [2024-06-10 11:22:40.587513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.818 [2024-06-10 11:22:40.672089] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.387 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:16.387 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:14:16.387 11:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:14:16.387 11:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:14:16.387 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.387 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:16.387 { 00:14:16.387 "filename": "/tmp/spdk_mem_dump.txt" 00:14:16.387 } 00:14:16.387 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.387 11:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:14:16.387 DPDK memory size 814.000000 MiB in 1 heap(s) 00:14:16.387 1 heaps totaling size 814.000000 MiB 00:14:16.387 size: 814.000000 MiB heap id: 0 00:14:16.387 end heaps---------- 00:14:16.387 8 mempools totaling size 598.116089 MiB 00:14:16.387 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:14:16.387 size: 158.602051 MiB name: PDU_data_out_Pool 00:14:16.387 size: 84.521057 MiB name: bdev_io_3773110 00:14:16.387 size: 51.011292 MiB name: evtpool_3773110 00:14:16.387 size: 50.003479 MiB name: 
msgpool_3773110 00:14:16.387 size: 21.763794 MiB name: PDU_Pool 00:14:16.387 size: 19.513306 MiB name: SCSI_TASK_Pool 00:14:16.387 size: 0.026123 MiB name: Session_Pool 00:14:16.387 end mempools------- 00:14:16.387 6 memzones totaling size 4.142822 MiB 00:14:16.387 size: 1.000366 MiB name: RG_ring_0_3773110 00:14:16.387 size: 1.000366 MiB name: RG_ring_1_3773110 00:14:16.387 size: 1.000366 MiB name: RG_ring_4_3773110 00:14:16.387 size: 1.000366 MiB name: RG_ring_5_3773110 00:14:16.387 size: 0.125366 MiB name: RG_ring_2_3773110 00:14:16.387 size: 0.015991 MiB name: RG_ring_3_3773110 00:14:16.387 end memzones------- 00:14:16.387 11:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:14:16.387 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:14:16.387 list of free elements. size: 12.519348 MiB 00:14:16.387 element at address: 0x200000400000 with size: 1.999512 MiB 00:14:16.387 element at address: 0x200018e00000 with size: 0.999878 MiB 00:14:16.387 element at address: 0x200019000000 with size: 0.999878 MiB 00:14:16.387 element at address: 0x200003e00000 with size: 0.996277 MiB 00:14:16.387 element at address: 0x200031c00000 with size: 0.994446 MiB 00:14:16.387 element at address: 0x200013800000 with size: 0.978699 MiB 00:14:16.387 element at address: 0x200007000000 with size: 0.959839 MiB 00:14:16.387 element at address: 0x200019200000 with size: 0.936584 MiB 00:14:16.387 element at address: 0x200000200000 with size: 0.841614 MiB 00:14:16.387 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:14:16.387 element at address: 0x20000b200000 with size: 0.490723 MiB 00:14:16.387 element at address: 0x200000800000 with size: 0.487793 MiB 00:14:16.387 element at address: 0x200019400000 with size: 0.485657 MiB 00:14:16.387 element at address: 0x200027e00000 with size: 0.410034 MiB 00:14:16.387 element at address: 0x200003a00000 with size: 0.355530 MiB 00:14:16.387 list of standard malloc elements. 
size: 199.218079 MiB 00:14:16.387 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:14:16.387 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:14:16.387 element at address: 0x200018efff80 with size: 1.000122 MiB 00:14:16.387 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:14:16.387 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:14:16.387 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:14:16.387 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:14:16.387 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:14:16.387 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:14:16.387 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:14:16.387 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:14:16.387 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200003adb300 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200003adb500 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200003affa80 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200003affb40 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:14:16.387 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:14:16.387 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:14:16.387 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:14:16.387 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:14:16.387 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:14:16.387 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200027e69040 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:14:16.387 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:14:16.387 list of memzone associated elements. 
size: 602.262573 MiB 00:14:16.387 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:14:16.387 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:14:16.387 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:14:16.387 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:14:16.387 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:14:16.387 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3773110_0 00:14:16.387 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:14:16.387 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3773110_0 00:14:16.387 element at address: 0x200003fff380 with size: 48.003052 MiB 00:14:16.387 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3773110_0 00:14:16.387 element at address: 0x2000195be940 with size: 20.255554 MiB 00:14:16.387 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:14:16.387 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:14:16.387 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:14:16.387 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:14:16.387 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3773110 00:14:16.387 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:14:16.387 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3773110 00:14:16.387 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:14:16.387 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3773110 00:14:16.387 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:14:16.388 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:14:16.388 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:14:16.388 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:14:16.388 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:14:16.388 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:14:16.388 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:14:16.388 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:14:16.388 element at address: 0x200003eff180 with size: 1.000488 MiB 00:14:16.388 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3773110 00:14:16.388 element at address: 0x200003affc00 with size: 1.000488 MiB 00:14:16.388 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3773110 00:14:16.388 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:14:16.388 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3773110 00:14:16.388 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:14:16.388 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3773110 00:14:16.388 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:14:16.388 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3773110 00:14:16.388 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:14:16.388 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:14:16.388 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:14:16.388 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:14:16.388 element at address: 0x20001947c540 with size: 0.250488 MiB 00:14:16.388 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:14:16.388 element at address: 0x200003adf880 with size: 0.125488 MiB 00:14:16.388 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3773110 00:14:16.388 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:14:16.388 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:14:16.388 element at address: 0x200027e69100 with size: 0.023743 MiB 00:14:16.388 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:14:16.388 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:14:16.388 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3773110 00:14:16.388 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:14:16.388 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:14:16.388 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:14:16.388 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3773110 00:14:16.388 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:14:16.388 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3773110 00:14:16.388 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:14:16.388 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:14:16.388 11:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:14:16.388 11:22:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3773110 00:14:16.388 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 3773110 ']' 00:14:16.388 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 3773110 00:14:16.388 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:14:16.388 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:16.388 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3773110 00:14:16.647 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:16.647 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:16.647 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3773110' 00:14:16.647 killing process with pid 3773110 00:14:16.647 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 3773110 00:14:16.647 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 3773110 00:14:16.906 00:14:16.906 real 0m1.531s 00:14:16.906 user 0m1.568s 00:14:16.906 sys 0m0.536s 00:14:16.906 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:16.906 11:22:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:14:16.906 ************************************ 00:14:16.906 END TEST dpdk_mem_utility 00:14:16.906 ************************************ 00:14:16.906 11:22:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:14:16.906 11:22:41 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:16.906 11:22:41 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:16.906 11:22:41 -- common/autotest_common.sh@10 -- # set +x 00:14:16.906 ************************************ 00:14:16.906 START TEST event 00:14:16.906 ************************************ 00:14:16.906 11:22:41 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:14:16.906 * Looking for test storage... 
00:14:17.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:14:17.164 11:22:42 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:14:17.164 11:22:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:14:17.164 11:22:42 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:17.164 11:22:42 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:14:17.164 11:22:42 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:17.164 11:22:42 event -- common/autotest_common.sh@10 -- # set +x 00:14:17.164 ************************************ 00:14:17.164 START TEST event_perf 00:14:17.164 ************************************ 00:14:17.164 11:22:42 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:14:17.164 Running I/O for 1 seconds...[2024-06-10 11:22:42.085493] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:17.164 [2024-06-10 11:22:42.085588] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773479 ] 00:14:17.164 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.164 [2024-06-10 11:22:42.205997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.421 [2024-06-10 11:22:42.294946] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.421 [2024-06-10 11:22:42.295038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.421 [2024-06-10 11:22:42.298593] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.421 [2024-06-10 11:22:42.298598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.356 Running I/O for 1 seconds... 00:14:18.356 lcore 0: 186906 00:14:18.356 lcore 1: 186906 00:14:18.356 lcore 2: 186907 00:14:18.356 lcore 3: 186908 00:14:18.356 done. 00:14:18.356 00:14:18.356 real 0m1.314s 00:14:18.356 user 0m4.187s 00:14:18.356 sys 0m0.126s 00:14:18.356 11:22:43 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:18.356 11:22:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:14:18.356 ************************************ 00:14:18.356 END TEST event_perf 00:14:18.356 ************************************ 00:14:18.356 11:22:43 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:14:18.356 11:22:43 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:14:18.356 11:22:43 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:18.356 11:22:43 event -- common/autotest_common.sh@10 -- # set +x 00:14:18.356 ************************************ 00:14:18.356 START TEST event_reactor 00:14:18.356 ************************************ 00:14:18.356 11:22:43 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:14:18.616 [2024-06-10 11:22:43.479668] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:14:18.616 [2024-06-10 11:22:43.479739] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773691 ] 00:14:18.616 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.616 [2024-06-10 11:22:43.600853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.616 [2024-06-10 11:22:43.685349] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.995 test_start 00:14:19.995 oneshot 00:14:19.995 tick 100 00:14:19.995 tick 100 00:14:19.995 tick 250 00:14:19.995 tick 100 00:14:19.995 tick 100 00:14:19.995 tick 250 00:14:19.995 tick 100 00:14:19.995 tick 500 00:14:19.995 tick 100 00:14:19.995 tick 100 00:14:19.995 tick 250 00:14:19.995 tick 100 00:14:19.995 tick 100 00:14:19.995 test_end 00:14:19.995 00:14:19.995 real 0m1.303s 00:14:19.995 user 0m1.164s 00:14:19.995 sys 0m0.133s 00:14:19.995 11:22:44 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:19.995 11:22:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:14:19.995 ************************************ 00:14:19.995 END TEST event_reactor 00:14:19.995 ************************************ 00:14:19.995 11:22:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:19.995 11:22:44 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:14:19.995 11:22:44 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:19.995 11:22:44 event -- common/autotest_common.sh@10 -- # set +x 00:14:19.995 ************************************ 00:14:19.995 START TEST event_reactor_perf 00:14:19.995 ************************************ 00:14:19.995 11:22:44 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:14:19.995 [2024-06-10 11:22:44.865118] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:14:19.995 [2024-06-10 11:22:44.865201] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773892 ] 00:14:19.995 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.995 [2024-06-10 11:22:44.989297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.995 [2024-06-10 11:22:45.072565] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.372 test_start 00:14:21.372 test_end 00:14:21.372 Performance: 354346 events per second 00:14:21.372 00:14:21.372 real 0m1.305s 00:14:21.372 user 0m1.161s 00:14:21.372 sys 0m0.138s 00:14:21.372 11:22:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:21.372 11:22:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:14:21.372 ************************************ 00:14:21.372 END TEST event_reactor_perf 00:14:21.372 ************************************ 00:14:21.372 11:22:46 event -- event/event.sh@49 -- # uname -s 00:14:21.372 11:22:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:14:21.372 11:22:46 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:14:21.372 11:22:46 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:21.372 11:22:46 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:21.372 11:22:46 event -- common/autotest_common.sh@10 -- # set +x 00:14:21.372 ************************************ 00:14:21.372 START TEST event_scheduler 00:14:21.372 ************************************ 00:14:21.372 11:22:46 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:14:21.372 * Looking for test storage... 00:14:21.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:14:21.372 11:22:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:14:21.372 11:22:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3774178 00:14:21.372 11:22:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:14:21.372 11:22:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:14:21.372 11:22:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3774178 00:14:21.372 11:22:46 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 3774178 ']' 00:14:21.372 11:22:46 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.372 11:22:46 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:21.372 11:22:46 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:21.372 11:22:46 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:21.372 11:22:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:21.372 [2024-06-10 11:22:46.386967] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:21.372 [2024-06-10 11:22:46.387030] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774178 ] 00:14:21.372 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.631 [2024-06-10 11:22:46.481227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.631 [2024-06-10 11:22:46.555699] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.631 [2024-06-10 11:22:46.555784] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.631 [2024-06-10 11:22:46.555893] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.631 [2024-06-10 11:22:46.555895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:14:22.569 11:22:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 POWER: Env isn't set yet! 00:14:22.569 POWER: Attempting to initialise ACPI cpufreq power management... 00:14:22.569 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:14:22.569 POWER: Cannot set governor of lcore 0 to userspace 00:14:22.569 POWER: Attempting to initialise PSTAT power management... 
00:14:22.569 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:14:22.569 POWER: Initialized successfully for lcore 0 power management 00:14:22.569 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:14:22.569 POWER: Initialized successfully for lcore 1 power management 00:14:22.569 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:14:22.569 POWER: Initialized successfully for lcore 2 power management 00:14:22.569 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:14:22.569 POWER: Initialized successfully for lcore 3 power management 00:14:22.569 [2024-06-10 11:22:47.339801] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:14:22.569 [2024-06-10 11:22:47.339817] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:14:22.569 [2024-06-10 11:22:47.339827] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 [2024-06-10 11:22:47.413603] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 ************************************ 00:14:22.569 START TEST scheduler_create_thread 00:14:22.569 ************************************ 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 2 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 3 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 4 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 5 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 6 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 7 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 8 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 9 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:22.569 10 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:22.569 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:23.136 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.136 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:14:23.136 11:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:14:23.136 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.136 11:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:24.071 11:22:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:24.071 11:22:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:14:24.071 11:22:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:24.071 11:22:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:25.007 11:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.007 11:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:14:25.007 11:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:14:25.007 11:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:25.007 11:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:25.940 11:22:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:25.940 00:14:25.940 real 0m3.230s 00:14:25.940 user 0m0.027s 00:14:25.940 sys 0m0.004s 00:14:25.940 11:22:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:25.940 11:22:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:14:25.940 ************************************ 00:14:25.940 END TEST scheduler_create_thread 00:14:25.940 ************************************ 00:14:25.940 11:22:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:25.940 11:22:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3774178 00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 3774178 ']' 00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 3774178 00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3774178 00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3774178' 00:14:25.940 killing process with pid 3774178 00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 3774178 00:14:25.940 11:22:50 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 3774178 00:14:26.199 [2024-06-10 11:22:51.066786] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:14:26.199 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:14:26.199 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:14:26.199 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:14:26.199 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:14:26.199 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:14:26.199 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:14:26.199 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:14:26.199 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:14:26.458 00:14:26.458 real 0m5.088s 00:14:26.458 user 0m10.544s 00:14:26.458 sys 0m0.483s 00:14:26.458 11:22:51 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:26.458 11:22:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:14:26.458 ************************************ 00:14:26.458 END TEST event_scheduler 00:14:26.458 ************************************ 00:14:26.458 11:22:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:14:26.458 11:22:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:14:26.458 11:22:51 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:26.458 11:22:51 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:26.458 11:22:51 event -- common/autotest_common.sh@10 -- # set +x 00:14:26.458 ************************************ 00:14:26.458 START TEST app_repeat 00:14:26.458 ************************************ 00:14:26.458 11:22:51 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3775230 00:14:26.458 11:22:51 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3775230' 00:14:26.458 Process app_repeat pid: 3775230 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:14:26.458 spdk_app_start Round 0 00:14:26.458 11:22:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3775230 /var/tmp/spdk-nbd.sock 00:14:26.458 11:22:51 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3775230 ']' 00:14:26.458 11:22:51 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:26.458 11:22:51 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:26.458 11:22:51 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:26.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:26.458 11:22:51 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:26.458 11:22:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:26.458 [2024-06-10 11:22:51.443171] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:26.458 [2024-06-10 11:22:51.443229] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775230 ] 00:14:26.458 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.458 [2024-06-10 11:22:51.562005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:26.717 [2024-06-10 11:22:51.649298] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.717 [2024-06-10 11:22:51.649303] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.284 11:22:52 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:27.284 11:22:52 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:14:27.284 11:22:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:27.543 Malloc0 00:14:27.543 11:22:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:27.803 Malloc1 00:14:27.803 11:22:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:27.803 11:22:52 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:27.803 11:22:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:28.062 /dev/nbd0 00:14:28.062 11:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:28.062 11:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:28.062 1+0 records in 00:14:28.062 1+0 records out 00:14:28.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215301 s, 19.0 MB/s 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:14:28.062 11:22:53 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:14:28.062 11:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.062 11:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.062 11:22:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:28.321 /dev/nbd1 00:14:28.321 11:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:28.321 11:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:14:28.321 11:22:53 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:28.321 1+0 records in 00:14:28.321 1+0 records out 00:14:28.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240658 s, 17.0 MB/s 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:14:28.321 11:22:53 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:14:28.321 11:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.321 11:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:28.321 11:22:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:28.321 11:22:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.321 11:22:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:28.579 11:22:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:28.579 { 00:14:28.579 "nbd_device": "/dev/nbd0", 00:14:28.579 "bdev_name": "Malloc0" 00:14:28.579 }, 00:14:28.579 { 00:14:28.579 "nbd_device": "/dev/nbd1", 00:14:28.579 "bdev_name": "Malloc1" 00:14:28.579 } 00:14:28.579 ]' 00:14:28.579 11:22:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:28.579 { 00:14:28.579 "nbd_device": "/dev/nbd0", 00:14:28.579 "bdev_name": "Malloc0" 00:14:28.579 }, 00:14:28.579 { 00:14:28.579 "nbd_device": "/dev/nbd1", 00:14:28.579 "bdev_name": "Malloc1" 00:14:28.579 } 00:14:28.579 ]' 00:14:28.579 11:22:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:28.579 11:22:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:28.579 /dev/nbd1' 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:28.838 /dev/nbd1' 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:28.838 11:22:53 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:28.838 256+0 records in 00:14:28.838 256+0 records out 00:14:28.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114766 s, 91.4 MB/s 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:28.838 256+0 records in 00:14:28.838 256+0 records out 00:14:28.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173993 s, 60.3 MB/s 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:28.838 256+0 records in 00:14:28.838 256+0 records out 00:14:28.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261936 s, 40.0 MB/s 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.838 11:22:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.097 11:22:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.356 11:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:29.615 11:22:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:29.615 11:22:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:29.874 11:22:54 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:14:30.174 [2024-06-10 11:22:54.981431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:30.174 [2024-06-10 11:22:55.059527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.174 [2024-06-10 11:22:55.059532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.174 [2024-06-10 11:22:55.103249] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:30.174 [2024-06-10 11:22:55.103298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:32.752 11:22:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:32.752 11:22:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:14:32.752 spdk_app_start Round 1 00:14:32.752 11:22:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3775230 /var/tmp/spdk-nbd.sock 00:14:32.752 11:22:57 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3775230 ']' 00:14:32.752 11:22:57 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:32.752 11:22:57 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:32.752 11:22:57 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:32.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:32.752 11:22:57 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:32.752 11:22:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:33.010 11:22:58 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:33.010 11:22:58 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:14:33.010 11:22:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:33.268 Malloc0 00:14:33.268 11:22:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:33.526 Malloc1 00:14:33.526 11:22:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
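Round 1 repeats the same bring-up as Round 0: two 64 MB malloc bdevs with a 4096-byte block size are created over the nbd RPC socket and exported as /dev/nbd0 and /dev/nbd1. Condensed from the rpc.py calls in the trace (Malloc0 and Malloc1 are the names the RPC returns):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # backing bdevs: 64 MB each, 4096-byte blocks
    $rpc -s $sock bdev_malloc_create 64 4096      # -> Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096      # -> Malloc1

    # export each bdev as a kernel nbd block device
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1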
00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:33.526 11:22:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:33.785 /dev/nbd0 00:14:33.785 11:22:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:33.785 11:22:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:33.785 1+0 records in 00:14:33.785 1+0 records out 00:14:33.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262622 s, 15.6 MB/s 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:14:33.785 11:22:58 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:14:33.785 11:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.785 11:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:33.785 11:22:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:34.044 /dev/nbd1 00:14:34.044 11:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:34.044 11:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
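The waitfornbd check that follows each nbd_start_disk call (traced for nbd0 above and for nbd1 below) first waits for the device to show up in /proc/partitions and then proves it is readable with a single 4 KiB O_DIRECT read into a scratch file. Roughly, reconstructed from the trace (any sleep between retries is not visible in this excerpt):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest

        # wait for the kernel to expose the device node
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
        done

        # one direct-I/O read; a non-empty scratch file means the device answers
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }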
00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:34.044 1+0 records in 00:14:34.044 1+0 records out 00:14:34.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024878 s, 16.5 MB/s 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:14:34.044 11:22:59 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:14:34.044 11:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.044 11:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:34.044 11:22:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:34.044 11:22:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.044 11:22:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:34.303 11:22:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:34.303 { 00:14:34.303 "nbd_device": "/dev/nbd0", 00:14:34.303 "bdev_name": "Malloc0" 00:14:34.303 }, 00:14:34.303 { 00:14:34.303 "nbd_device": "/dev/nbd1", 00:14:34.303 "bdev_name": "Malloc1" 00:14:34.303 } 00:14:34.303 ]' 00:14:34.303 11:22:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:34.303 { 00:14:34.303 "nbd_device": "/dev/nbd0", 00:14:34.303 "bdev_name": "Malloc0" 00:14:34.303 }, 00:14:34.303 { 00:14:34.303 "nbd_device": "/dev/nbd1", 00:14:34.303 "bdev_name": "Malloc1" 00:14:34.303 } 00:14:34.303 ]' 00:14:34.303 11:22:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:34.303 11:22:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:34.303 /dev/nbd1' 00:14:34.303 11:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:34.303 /dev/nbd1' 00:14:34.303 11:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:34.303 11:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:34.304 256+0 records in 00:14:34.304 256+0 records out 00:14:34.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105633 s, 99.3 MB/s 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:34.304 256+0 records in 00:14:34.304 256+0 records out 00:14:34.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02781 s, 37.7 MB/s 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:34.304 256+0 records in 00:14:34.304 256+0 records out 00:14:34.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018475 s, 56.8 MB/s 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:34.304 11:22:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:34.563 11:22:59 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.563 11:22:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.822 11:22:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:35.081 11:23:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:35.081 11:23:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:35.339 11:23:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:35.597 [2024-06-10 11:23:00.546039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:35.597 [2024-06-10 11:23:00.622551] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.597 [2024-06-10 11:23:00.622556] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.597 [2024-06-10 11:23:00.667484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
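Each round above ends the same way: event.sh asks the running app to shut itself down over the nbd RPC socket, sleeps a few seconds, and the app_repeat binary (launched once with -t 4) restarts its SPDK app framework for the next round. A rough outline of the driving loop, with the per-round nbd work elided (see the sketches above); repeat_pid is the pid recorded when app_repeat was launched:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock       # block until the RPC socket is up
        # ... create malloc bdevs, start/verify/stop nbd disks as sketched above ...
        $rpc -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM  # end this app iteration
        sleep 3                                                  # give the app time to cycle
    done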
00:14:35.597 [2024-06-10 11:23:00.667532] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:38.883 11:23:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:38.883 11:23:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:14:38.883 spdk_app_start Round 2 00:14:38.883 11:23:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3775230 /var/tmp/spdk-nbd.sock 00:14:38.883 11:23:03 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3775230 ']' 00:14:38.883 11:23:03 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:38.883 11:23:03 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:38.883 11:23:03 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:38.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:38.883 11:23:03 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:38.883 11:23:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:38.883 11:23:03 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:38.883 11:23:03 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:14:38.883 11:23:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:38.883 Malloc0 00:14:38.883 11:23:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:38.883 Malloc1 00:14:38.884 11:23:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.884 11:23:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:39.142 /dev/nbd0 00:14:39.142 
11:23:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:39.142 11:23:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:39.142 1+0 records in 00:14:39.142 1+0 records out 00:14:39.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243384 s, 16.8 MB/s 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:14:39.142 11:23:04 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:14:39.142 11:23:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.142 11:23:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:39.142 11:23:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:39.401 /dev/nbd1 00:14:39.401 11:23:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:39.401 11:23:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:39.401 1+0 records in 00:14:39.401 1+0 records out 00:14:39.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235315 s, 17.4 MB/s 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:14:39.401 11:23:04 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:14:39.401 11:23:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.401 11:23:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:39.401 11:23:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:39.401 11:23:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:39.401 11:23:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:39.659 { 00:14:39.659 "nbd_device": "/dev/nbd0", 00:14:39.659 "bdev_name": "Malloc0" 00:14:39.659 }, 00:14:39.659 { 00:14:39.659 "nbd_device": "/dev/nbd1", 00:14:39.659 "bdev_name": "Malloc1" 00:14:39.659 } 00:14:39.659 ]' 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:39.659 { 00:14:39.659 "nbd_device": "/dev/nbd0", 00:14:39.659 "bdev_name": "Malloc0" 00:14:39.659 }, 00:14:39.659 { 00:14:39.659 "nbd_device": "/dev/nbd1", 00:14:39.659 "bdev_name": "Malloc1" 00:14:39.659 } 00:14:39.659 ]' 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:39.659 /dev/nbd1' 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:39.659 /dev/nbd1' 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:39.659 256+0 records in 00:14:39.659 256+0 records out 00:14:39.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105506 s, 99.4 MB/s 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:39.659 256+0 records in 00:14:39.659 256+0 records out 00:14:39.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172107 s, 60.9 MB/s 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:39.659 256+0 records in 00:14:39.659 256+0 records out 00:14:39.659 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286936 s, 36.5 MB/s 00:14:39.659 11:23:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:39.660 11:23:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:14:39.919 11:23:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:14:39.919 11:23:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:39.919 11:23:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:39.919 11:23:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:39.919 11:23:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.919 11:23:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:39.919 11:23:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.919 11:23:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:39.919 11:23:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.919 11:23:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.919 11:23:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.919 11:23:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
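The write/verify pass that just completed for Round 2 (and earlier for Rounds 0 and 1) is the core of the data check: 1 MiB of random data is written through each nbd device with O_DIRECT, then read back and compared byte-for-byte against the source file. In the actual helper the write and verify passes are separate invocations (the 'write' and 'verify' operation argument visible in the trace); they are merged here for brevity:

    nbd_dd_data_verify_sketch() {
        local nbd_list=(/dev/nbd0 /dev/nbd1)
        local tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest

        # 1 MiB of random reference data (256 x 4 KiB blocks)
        dd if=/dev/urandom of=$tmp bs=4096 count=256

        # write the same data through every exported device
        for dev in "${nbd_list[@]}"; do
            dd if=$tmp of=$dev bs=4096 count=256 oflag=direct
        done

        # read it back and require a byte-exact match
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp" "$dev"
        done
        rm "$tmp"
    }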
00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.178 11:23:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:40.437 11:23:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:40.437 11:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:40.437 11:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.696 11:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:40.696 11:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:40.696 11:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.696 11:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:40.696 11:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:40.696 11:23:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:40.696 11:23:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:40.696 11:23:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:40.696 11:23:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:40.696 11:23:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:40.955 11:23:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:40.955 [2024-06-10 11:23:06.025517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:41.213 [2024-06-10 11:23:06.102306] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.213 [2024-06-10 11:23:06.102311] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.213 [2024-06-10 11:23:06.145709] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:41.213 [2024-06-10 11:23:06.145756] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
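After the disks are stopped, the harness confirms nothing is still exported by counting entries in nbd_get_disks. The counting in the trace is just jq plus grep -c; the bare "true" that appears at nbd_common.sh@65 suggests a "|| true" guard, needed because grep -c exits non-zero when it counts zero matches. A sketch of that check:

    nbd_get_count() {
        local rpc_server=$1
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        local disks_json disks_name count

        # list currently exported nbd devices as JSON and pull out the device paths
        disks_json=$($rpc -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')

        # grep -c exits 1 when the count is 0, hence the "|| true" guard
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

    # teardown check: fail the round if anything is still exported
    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    [ "$count" -ne 0 ] && echo "nbd devices still present" && exit 1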
00:14:43.746 11:23:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3775230 /var/tmp/spdk-nbd.sock 00:14:43.746 11:23:08 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3775230 ']' 00:14:43.746 11:23:08 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:43.746 11:23:08 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:43.746 11:23:08 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:43.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:43.746 11:23:08 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:43.746 11:23:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:14:44.005 11:23:09 event.app_repeat -- event/event.sh@39 -- # killprocess 3775230 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 3775230 ']' 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 3775230 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3775230 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3775230' 00:14:44.005 killing process with pid 3775230 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@968 -- # kill 3775230 00:14:44.005 11:23:09 event.app_repeat -- common/autotest_common.sh@973 -- # wait 3775230 00:14:44.333 spdk_app_start is called in Round 0. 00:14:44.333 Shutdown signal received, stop current app iteration 00:14:44.333 Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 reinitialization... 00:14:44.333 spdk_app_start is called in Round 1. 00:14:44.333 Shutdown signal received, stop current app iteration 00:14:44.333 Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 reinitialization... 00:14:44.333 spdk_app_start is called in Round 2. 00:14:44.333 Shutdown signal received, stop current app iteration 00:14:44.333 Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 reinitialization... 00:14:44.333 spdk_app_start is called in Round 3. 
00:14:44.333 Shutdown signal received, stop current app iteration 00:14:44.334 11:23:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:14:44.334 11:23:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:14:44.334 00:14:44.334 real 0m17.859s 00:14:44.334 user 0m38.452s 00:14:44.334 sys 0m3.581s 00:14:44.334 11:23:09 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:44.334 11:23:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:44.334 ************************************ 00:14:44.334 END TEST app_repeat 00:14:44.334 ************************************ 00:14:44.334 11:23:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:14:44.334 11:23:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:14:44.334 11:23:09 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:44.334 11:23:09 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:44.334 11:23:09 event -- common/autotest_common.sh@10 -- # set +x 00:14:44.334 ************************************ 00:14:44.334 START TEST cpu_locks 00:14:44.334 ************************************ 00:14:44.334 11:23:09 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:14:44.593 * Looking for test storage... 00:14:44.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:14:44.593 11:23:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:14:44.593 11:23:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:14:44.593 11:23:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:14:44.593 11:23:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:14:44.593 11:23:09 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:44.593 11:23:09 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:44.593 11:23:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 ************************************ 00:14:44.593 START TEST default_locks 00:14:44.593 ************************************ 00:14:44.593 11:23:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:14:44.593 11:23:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3778415 00:14:44.593 11:23:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3778415 00:14:44.593 11:23:09 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3778415 ']' 00:14:44.593 11:23:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.593 11:23:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:44.593 11:23:09 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
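Editor's note on the default_locks case starting here: the trace below boils down to starting one spdk_tgt pinned to core 0 (-m 0x1), confirming with lslocks that the process holds a spdk_cpu_lock file, killing it, and expecting a later waitforlisten on the dead PID to fail. The following is only a minimal standalone sketch of that lock check, assuming the spdk_tgt path used in this run; the sleep is an illustrative stand-in for waitforlisten.

#!/usr/bin/env bash
# Sketch only: path taken from this run, timing is illustrative, not the test's exact logic.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &             # pin the target to core 0
tgt_pid=$!
sleep 2                          # crude stand-in for waitforlisten on /var/tmp/spdk.sock

# While the target runs, core 0 is held via a lock on /var/tmp/spdk_cpu_lock_000.
if lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock; then
    echo "pid $tgt_pid holds its CPU core lock"
fi

kill "$tgt_pid" && wait "$tgt_pid"
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no stale lock files left behind"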
00:14:44.593 11:23:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:44.593 11:23:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:14:44.593 11:23:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:44.593 [2024-06-10 11:23:09.553513] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:44.593 [2024-06-10 11:23:09.553598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778415 ] 00:14:44.593 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.593 [2024-06-10 11:23:09.675566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.851 [2024-06-10 11:23:09.764517] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.418 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:45.418 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:14:45.418 11:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3778415 00:14:45.418 11:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3778415 00:14:45.418 11:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:45.677 lslocks: write error 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3778415 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 3778415 ']' 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 3778415 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3778415 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3778415' 00:14:45.677 killing process with pid 3778415 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 3778415 00:14:45.677 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 3778415 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3778415 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3778415 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:14:46.245 11:23:11 
event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 3778415 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3778415 ']' 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:46.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3778415) - No such process 00:14:46.245 ERROR: process (pid: 3778415) is no longer running 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:46.245 00:14:46.245 real 0m1.616s 00:14:46.245 user 0m1.681s 00:14:46.245 sys 0m0.586s 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:46.245 11:23:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:46.245 ************************************ 00:14:46.245 END TEST default_locks 00:14:46.245 ************************************ 00:14:46.245 11:23:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:14:46.245 11:23:11 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:46.245 11:23:11 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:46.245 11:23:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:46.245 ************************************ 00:14:46.245 START TEST default_locks_via_rpc 00:14:46.245 ************************************ 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=3778768 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3778768 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3778768 ']' 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:46.245 11:23:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:46.245 [2024-06-10 11:23:11.244120] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:46.245 [2024-06-10 11:23:11.244174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3778768 ] 00:14:46.245 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.504 [2024-06-10 11:23:11.363506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.504 [2024-06-10 11:23:11.449373] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3778768 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3778768 00:14:47.072 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 
-- # grep -q spdk_cpu_lock 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3778768 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 3778768 ']' 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 3778768 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3778768 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3778768' 00:14:47.638 killing process with pid 3778768 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 3778768 00:14:47.638 11:23:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 3778768 00:14:48.205 00:14:48.205 real 0m1.869s 00:14:48.205 user 0m1.991s 00:14:48.205 sys 0m0.695s 00:14:48.205 11:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:48.205 11:23:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.205 ************************************ 00:14:48.205 END TEST default_locks_via_rpc 00:14:48.205 ************************************ 00:14:48.205 11:23:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:14:48.205 11:23:13 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:48.205 11:23:13 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:48.205 11:23:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:48.205 ************************************ 00:14:48.205 START TEST non_locking_app_on_locked_coremask 00:14:48.205 ************************************ 00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3779261 00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3779261 /var/tmp/spdk.sock 00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3779261 ']' 00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
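Editor's note on the default_locks_via_rpc run finishing above: it shows the same per-core lock being dropped and re-taken at runtime over JSON-RPC, with framework_disable_cpumask_locks releasing the lock file and framework_enable_cpumask_locks re-claiming it. A hedged sketch using the repository's rpc.py over the default socket (timing is illustrative):

#!/usr/bin/env bash
# Sketch: toggle CPU core locks on a running target via JSON-RPC, then verify with lslocks.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!
sleep 2

"$SPDK_DIR"/scripts/rpc.py framework_disable_cpumask_locks    # drop the core-0 lock
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"

"$SPDK_DIR"/scripts/rpc.py framework_enable_cpumask_locks     # claim it again
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"

kill "$tgt_pid" && wait "$tgt_pid"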
00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:14:48.205 11:23:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:48.205 [2024-06-10 11:23:13.193062] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:48.206 [2024-06-10 11:23:13.193119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779261 ] 00:14:48.206 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.464 [2024-06-10 11:23:13.311953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.464 [2024-06-10 11:23:13.396284] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3779286 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3779286 /var/tmp/spdk2.sock 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3779286 ']' 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:49.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:49.030 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:49.030 [2024-06-10 11:23:14.072894] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:49.031 [2024-06-10 11:23:14.072962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779286 ] 00:14:49.031 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.289 [2024-06-10 11:23:14.235035] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:14:49.289 [2024-06-10 11:23:14.235067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.548 [2024-06-10 11:23:14.403189] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.114 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:50.114 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:14:50.114 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3779261 00:14:50.114 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3779261 00:14:50.114 11:23:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:50.373 lslocks: write error 00:14:50.373 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3779261 00:14:50.373 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3779261 ']' 00:14:50.373 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3779261 00:14:50.373 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:14:50.373 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:50.373 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3779261 00:14:50.632 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:50.632 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:50.632 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3779261' 00:14:50.632 killing process with pid 3779261 00:14:50.632 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3779261 00:14:50.632 11:23:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3779261 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3779286 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3779286 ']' 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3779286 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3779286 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3779286' 00:14:51.199 
killing process with pid 3779286 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3779286 00:14:51.199 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3779286 00:14:51.458 00:14:51.458 real 0m3.412s 00:14:51.458 user 0m3.674s 00:14:51.458 sys 0m1.090s 00:14:51.458 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:51.458 11:23:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:51.458 ************************************ 00:14:51.458 END TEST non_locking_app_on_locked_coremask 00:14:51.458 ************************************ 00:14:51.718 11:23:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:14:51.718 11:23:16 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:51.718 11:23:16 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:51.718 11:23:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:51.718 ************************************ 00:14:51.718 START TEST locking_app_on_unlocked_coremask 00:14:51.718 ************************************ 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3779847 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3779847 /var/tmp/spdk.sock 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3779847 ']' 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:51.718 11:23:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:51.718 [2024-06-10 11:23:16.689412] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:14:51.718 [2024-06-10 11:23:16.689469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779847 ] 00:14:51.718 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.718 [2024-06-10 11:23:16.809434] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
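Editor's note on the non_locking_app_on_locked_coremask case wrapped up above: it exercises the escape hatch for sharing a core, where the second instance is started with --disable-cpumask-locks and its own RPC socket so it never tries to claim the core 0 lock the first instance already holds. A rough sketch of that pairing, with paths as in this run and illustrative waits:

#!/usr/bin/env bash
# Sketch: two targets sharing core 0 because the second one skips lock acquisition.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &                                         # holds /var/tmp/spdk_cpu_lock_000
first=$!
sleep 2

"$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
second=$!                                                    # same core, no lock attempt
sleep 2

lslocks -p "$first"  | grep -q spdk_cpu_lock && echo "first instance holds the core lock"
lslocks -p "$second" | grep -q spdk_cpu_lock || echo "second instance holds no lock"

kill "$first" "$second"; wait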
00:14:51.718 [2024-06-10 11:23:16.809464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.977 [2024-06-10 11:23:16.893950] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3779983 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3779983 /var/tmp/spdk2.sock 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3779983 ']' 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:52.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:52.544 11:23:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:52.544 [2024-06-10 11:23:17.633335] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:14:52.544 [2024-06-10 11:23:17.633403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779983 ] 00:14:52.804 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.804 [2024-06-10 11:23:17.794216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.102 [2024-06-10 11:23:17.962900] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.671 11:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:53.671 11:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:14:53.671 11:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3779983 00:14:53.671 11:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3779983 00:14:53.671 11:23:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:55.048 lslocks: write error 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3779847 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3779847 ']' 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3779847 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3779847 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3779847' 00:14:55.048 killing process with pid 3779847 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3779847 00:14:55.048 11:23:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3779847 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3779983 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3779983 ']' 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3779983 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3779983 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3779983' 00:14:55.616 killing process with pid 3779983 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3779983 00:14:55.616 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3779983 00:14:55.875 00:14:55.875 real 0m4.210s 00:14:55.875 user 0m4.582s 00:14:55.875 sys 0m1.460s 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:55.875 ************************************ 00:14:55.875 END TEST locking_app_on_unlocked_coremask 00:14:55.875 ************************************ 00:14:55.875 11:23:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:14:55.875 11:23:20 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:55.875 11:23:20 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:55.875 11:23:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:55.875 ************************************ 00:14:55.875 START TEST locking_app_on_locked_coremask 00:14:55.875 ************************************ 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3780664 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3780664 /var/tmp/spdk.sock 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3780664 ']' 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:55.875 11:23:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:56.135 [2024-06-10 11:23:20.983065] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
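Editor's note on the teardown pattern repeated throughout these cases: each one goes through the killprocess helper visible in the trace, which verifies the PID still exists, checks its command name with ps so a sudo wrapper is never signalled directly, then kills and reaps the reactor. The function below is a simplified reading of that traced sequence, not the helper's exact source; the sudo branch of the real helper is deliberately omitted.

#!/usr/bin/env bash
# Sketch of the traced teardown pattern (kill the reactor process, never a sudo wrapper).
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" || return 0                        # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")           # "reactor_0" for a healthy spdk_tgt
    # The real helper special-cases name == "sudo"; that branch is not reproduced here.
    if [[ $name != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap it; ignore the exit status
    fi
}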
00:14:56.135 [2024-06-10 11:23:20.983127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780664 ] 00:14:56.135 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.135 [2024-06-10 11:23:21.104374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.135 [2024-06-10 11:23:21.188925] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3780694 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3780694 /var/tmp/spdk2.sock 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3780694 /var/tmp/spdk2.sock 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3780694 /var/tmp/spdk2.sock 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3780694 ']' 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:57.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:57.134 11:23:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:57.134 [2024-06-10 11:23:21.934399] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:14:57.134 [2024-06-10 11:23:21.934461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3780694 ] 00:14:57.134 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.134 [2024-06-10 11:23:22.099254] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3780664 has claimed it. 00:14:57.134 [2024-06-10 11:23:22.099303] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:57.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3780694) - No such process 00:14:57.702 ERROR: process (pid: 3780694) is no longer running 00:14:57.702 11:23:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:57.702 11:23:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:14:57.702 11:23:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:14:57.702 11:23:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:57.702 11:23:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:57.702 11:23:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:57.702 11:23:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3780664 00:14:57.702 11:23:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3780664 00:14:57.702 11:23:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:58.270 lslocks: write error 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3780664 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3780664 ']' 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3780664 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3780664 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3780664' 00:14:58.270 killing process with pid 3780664 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3780664 00:14:58.270 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3780664 00:14:58.837 00:14:58.837 real 0m2.739s 00:14:58.837 user 0m3.025s 00:14:58.837 sys 0m0.989s 00:14:58.837 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:14:58.837 11:23:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:58.837 ************************************ 00:14:58.837 END TEST locking_app_on_locked_coremask 00:14:58.837 ************************************ 00:14:58.837 11:23:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:14:58.837 11:23:23 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:58.837 11:23:23 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:58.837 11:23:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:58.838 ************************************ 00:14:58.838 START TEST locking_overlapped_coremask 00:14:58.838 ************************************ 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3781108 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3781108 /var/tmp/spdk.sock 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3781108 ']' 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:58.838 11:23:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:58.838 [2024-06-10 11:23:23.802393] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
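Editor's note on the locking_app_on_locked_coremask case that ends just above: it demonstrates the failure path, where a second spdk_tgt launched on an already-claimed core without --disable-cpumask-locks logs "Cannot create lock on core 0, probably process ... has claimed it" and exits non-zero, which is exactly what the harness's NOT wrapper expects. A small expected-failure sketch along those lines; the timeout value is illustrative:

#!/usr/bin/env bash
# Sketch: starting a second target on a claimed core is expected to fail fast.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &
first=$!
sleep 2                                               # let it claim /var/tmp/spdk_cpu_lock_000

# Second instance on the same core, locks enabled: should refuse to start.
if ! timeout 10 "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second instance rejected as expected (core 0 already claimed by pid $first)"
fi

kill "$first"; wait "$first"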
00:14:58.838 [2024-06-10 11:23:23.802453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781108 ] 00:14:58.838 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.838 [2024-06-10 11:23:23.922908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:59.096 [2024-06-10 11:23:24.010365] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.096 [2024-06-10 11:23:24.010459] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.096 [2024-06-10 11:23:24.010462] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3781263 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3781263 /var/tmp/spdk2.sock 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3781263 /var/tmp/spdk2.sock 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3781263 /var/tmp/spdk2.sock 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3781263 ']' 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:59.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:59.663 11:23:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:59.664 [2024-06-10 11:23:24.767137] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:14:59.664 [2024-06-10 11:23:24.767201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781263 ] 00:14:59.922 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.922 [2024-06-10 11:23:24.899072] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3781108 has claimed it. 00:14:59.922 [2024-06-10 11:23:24.899120] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:00.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3781263) - No such process 00:15:00.489 ERROR: process (pid: 3781263) is no longer running 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3781108 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 3781108 ']' 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 3781108 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3781108 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3781108' 00:15:00.489 killing process with pid 3781108 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
3781108 00:15:00.489 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 3781108 00:15:00.748 00:15:00.748 real 0m2.085s 00:15:00.748 user 0m5.784s 00:15:00.748 sys 0m0.571s 00:15:00.748 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:00.748 11:23:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:00.748 ************************************ 00:15:00.748 END TEST locking_overlapped_coremask 00:15:00.748 ************************************ 00:15:01.007 11:23:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:15:01.007 11:23:25 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:01.007 11:23:25 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:01.007 11:23:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:01.007 ************************************ 00:15:01.007 START TEST locking_overlapped_coremask_via_rpc 00:15:01.007 ************************************ 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3781555 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3781555 /var/tmp/spdk.sock 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3781555 ']' 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:01.007 11:23:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.007 [2024-06-10 11:23:25.968277] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:01.007 [2024-06-10 11:23:25.968333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781555 ] 00:15:01.007 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.007 [2024-06-10 11:23:26.087110] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
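Editor's note on the locking_overlapped_coremask case closed out above: it makes the per-core granularity visible, since the first target's mask 0x7 claims cores 0 through 2, the second target's mask 0x1c is turned away because core 2 overlaps, and check_remaining_locks then confirms that exactly /var/tmp/spdk_cpu_lock_000 through _002 exist. A sketch of that final comparison, assuming the first target is still running:

#!/usr/bin/env bash
# Sketch: verify the surviving lock files match the first target's 0x7 mask (cores 0-2).
shopt -s nullglob
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "only cores 0-2 are locked, as expected for -m 0x7"
else
    echo "unexpected lock files: ${locks[*]}" >&2
    exit 1
fi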
00:15:01.007 [2024-06-10 11:23:26.087140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:01.266 [2024-06-10 11:23:26.174637] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.266 [2024-06-10 11:23:26.174731] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.266 [2024-06-10 11:23:26.174733] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3781688 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3781688 /var/tmp/spdk2.sock 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3781688 ']' 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:01.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:01.833 11:23:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.833 [2024-06-10 11:23:26.917883] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:01.833 [2024-06-10 11:23:26.917947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781688 ] 00:15:02.092 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.092 [2024-06-10 11:23:27.047601] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:02.092 [2024-06-10 11:23:27.047624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:02.092 [2024-06-10 11:23:27.190048] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.092 [2024-06-10 11:23:27.190165] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.092 [2024-06-10 11:23:27.190166] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.028 [2024-06-10 11:23:27.844658] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3781555 has claimed it. 
00:15:03.028 request: 00:15:03.028 { 00:15:03.028 "method": "framework_enable_cpumask_locks", 00:15:03.028 "req_id": 1 00:15:03.028 } 00:15:03.028 Got JSON-RPC error response 00:15:03.028 response: 00:15:03.028 { 00:15:03.028 "code": -32603, 00:15:03.028 "message": "Failed to claim CPU core: 2" 00:15:03.028 } 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3781555 /var/tmp/spdk.sock 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3781555 ']' 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:03.028 11:23:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.028 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:03.028 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:15:03.028 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3781688 /var/tmp/spdk2.sock 00:15:03.028 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3781688 ']' 00:15:03.028 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:03.028 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:03.028 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:03.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:15:03.028 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:03.028 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.348 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:03.349 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:15:03.349 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:15:03.349 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:03.349 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:03.349 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:03.349 00:15:03.349 real 0m2.424s 00:15:03.349 user 0m1.121s 00:15:03.349 sys 0m0.227s 00:15:03.349 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:03.349 11:23:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.349 ************************************ 00:15:03.349 END TEST locking_overlapped_coremask_via_rpc 00:15:03.349 ************************************ 00:15:03.349 11:23:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:15:03.349 11:23:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3781555 ]] 00:15:03.349 11:23:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3781555 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3781555 ']' 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3781555 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3781555 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3781555' 00:15:03.349 killing process with pid 3781555 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3781555 00:15:03.349 11:23:28 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3781555 00:15:03.694 11:23:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3781688 ]] 00:15:03.694 11:23:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3781688 00:15:03.694 11:23:28 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3781688 ']' 00:15:03.694 11:23:28 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3781688 00:15:03.694 11:23:28 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:15:03.694 11:23:28 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:15:03.694 11:23:28 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3781688 00:15:03.953 11:23:28 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:15:03.953 11:23:28 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:15:03.953 11:23:28 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3781688' 00:15:03.953 killing process with pid 3781688 00:15:03.953 11:23:28 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3781688 00:15:03.953 11:23:28 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3781688 00:15:04.211 11:23:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:04.211 11:23:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:15:04.211 11:23:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3781555 ]] 00:15:04.211 11:23:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3781555 00:15:04.211 11:23:29 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3781555 ']' 00:15:04.211 11:23:29 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3781555 00:15:04.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3781555) - No such process 00:15:04.211 11:23:29 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3781555 is not found' 00:15:04.211 Process with pid 3781555 is not found 00:15:04.211 11:23:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3781688 ]] 00:15:04.211 11:23:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3781688 00:15:04.211 11:23:29 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3781688 ']' 00:15:04.211 11:23:29 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3781688 00:15:04.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3781688) - No such process 00:15:04.211 11:23:29 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3781688 is not found' 00:15:04.211 Process with pid 3781688 is not found 00:15:04.211 11:23:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:04.211 00:15:04.211 real 0m19.800s 00:15:04.211 user 0m33.675s 00:15:04.211 sys 0m6.768s 00:15:04.211 11:23:29 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:04.211 11:23:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:04.211 ************************************ 00:15:04.211 END TEST cpu_locks 00:15:04.211 ************************************ 00:15:04.211 00:15:04.211 real 0m47.280s 00:15:04.211 user 1m29.403s 00:15:04.211 sys 0m11.673s 00:15:04.211 11:23:29 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:04.211 11:23:29 event -- common/autotest_common.sh@10 -- # set +x 00:15:04.211 ************************************ 00:15:04.211 END TEST event 00:15:04.211 ************************************ 00:15:04.211 11:23:29 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:15:04.211 11:23:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:04.211 11:23:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:04.211 11:23:29 -- common/autotest_common.sh@10 -- # set +x 00:15:04.211 ************************************ 00:15:04.211 START TEST thread 00:15:04.211 ************************************ 00:15:04.211 11:23:29 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:15:04.470 * Looking for test storage... 00:15:04.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:15:04.470 11:23:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:04.470 11:23:29 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:15:04.470 11:23:29 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:04.470 11:23:29 thread -- common/autotest_common.sh@10 -- # set +x 00:15:04.470 ************************************ 00:15:04.470 START TEST thread_poller_perf 00:15:04.470 ************************************ 00:15:04.470 11:23:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:04.470 [2024-06-10 11:23:29.437175] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:04.470 [2024-06-10 11:23:29.437245] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782205 ] 00:15:04.470 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.470 [2024-06-10 11:23:29.556943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.729 [2024-06-10 11:23:29.641309] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.729 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:15:05.674 ====================================== 00:15:05.674 busy:2510224846 (cyc) 00:15:05.674 total_run_count: 285000 00:15:05.674 tsc_hz: 2500000000 (cyc) 00:15:05.674 ====================================== 00:15:05.674 poller_cost: 8807 (cyc), 3522 (nsec) 00:15:05.674 00:15:05.674 real 0m1.311s 00:15:05.674 user 0m1.179s 00:15:05.674 sys 0m0.125s 00:15:05.674 11:23:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:05.674 11:23:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:05.674 ************************************ 00:15:05.674 END TEST thread_poller_perf 00:15:05.674 ************************************ 00:15:05.674 11:23:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:05.674 11:23:30 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:15:05.674 11:23:30 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:05.674 11:23:30 thread -- common/autotest_common.sh@10 -- # set +x 00:15:05.933 ************************************ 00:15:05.933 START TEST thread_poller_perf 00:15:05.933 ************************************ 00:15:05.933 11:23:30 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:05.933 [2024-06-10 11:23:30.825498] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:15:05.934 [2024-06-10 11:23:30.825598] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782485 ] 00:15:05.934 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.934 [2024-06-10 11:23:30.946336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.934 [2024-06-10 11:23:31.027158] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.934 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:15:07.310 ====================================== 00:15:07.310 busy:2502634790 (cyc) 00:15:07.310 total_run_count: 3820000 00:15:07.310 tsc_hz: 2500000000 (cyc) 00:15:07.310 ====================================== 00:15:07.310 poller_cost: 655 (cyc), 262 (nsec) 00:15:07.310 00:15:07.310 real 0m1.299s 00:15:07.310 user 0m1.169s 00:15:07.310 sys 0m0.124s 00:15:07.310 11:23:32 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:07.310 11:23:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:07.310 ************************************ 00:15:07.310 END TEST thread_poller_perf 00:15:07.310 ************************************ 00:15:07.310 11:23:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:15:07.310 00:15:07.310 real 0m2.878s 00:15:07.310 user 0m2.453s 00:15:07.310 sys 0m0.437s 00:15:07.310 11:23:32 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:07.310 11:23:32 thread -- common/autotest_common.sh@10 -- # set +x 00:15:07.310 ************************************ 00:15:07.310 END TEST thread 00:15:07.310 ************************************ 00:15:07.310 11:23:32 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:15:07.310 11:23:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:07.310 11:23:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:07.310 11:23:32 -- common/autotest_common.sh@10 -- # set +x 00:15:07.310 ************************************ 00:15:07.310 START TEST accel 00:15:07.310 ************************************ 00:15:07.310 11:23:32 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:15:07.310 * Looking for test storage... 
00:15:07.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:15:07.310 11:23:32 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:15:07.310 11:23:32 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:15:07.310 11:23:32 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:07.310 11:23:32 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3782806 00:15:07.310 11:23:32 accel -- accel/accel.sh@63 -- # waitforlisten 3782806 00:15:07.310 11:23:32 accel -- common/autotest_common.sh@830 -- # '[' -z 3782806 ']' 00:15:07.310 11:23:32 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.310 11:23:32 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:15:07.310 11:23:32 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:07.310 11:23:32 accel -- accel/accel.sh@61 -- # build_accel_config 00:15:07.310 11:23:32 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.310 11:23:32 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:07.310 11:23:32 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:07.310 11:23:32 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:07.310 11:23:32 accel -- common/autotest_common.sh@10 -- # set +x 00:15:07.310 11:23:32 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:07.310 11:23:32 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:07.310 11:23:32 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:07.310 11:23:32 accel -- accel/accel.sh@40 -- # local IFS=, 00:15:07.310 11:23:32 accel -- accel/accel.sh@41 -- # jq -r . 00:15:07.310 [2024-06-10 11:23:32.386684] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:07.310 [2024-06-10 11:23:32.386748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782806 ] 00:15:07.569 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.569 [2024-06-10 11:23:32.506765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.569 [2024-06-10 11:23:32.588403] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.505 11:23:33 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:08.505 11:23:33 accel -- common/autotest_common.sh@863 -- # return 0 00:15:08.505 11:23:33 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:15:08.505 11:23:33 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:15:08.505 11:23:33 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:15:08.505 11:23:33 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:15:08.505 11:23:33 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:15:08.505 11:23:33 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:15:08.505 11:23:33 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.505 11:23:33 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:15:08.505 11:23:33 accel -- common/autotest_common.sh@10 -- # set +x 00:15:08.505 11:23:33 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.505 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.505 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.505 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.505 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.505 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.505 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.505 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.505 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.505 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.505 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.505 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.505 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.505 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.505 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.505 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.505 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 
11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # IFS== 00:15:08.506 11:23:33 accel -- accel/accel.sh@72 -- # read -r opc module 00:15:08.506 11:23:33 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:15:08.506 11:23:33 accel -- accel/accel.sh@75 -- # killprocess 3782806 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@949 -- # '[' -z 3782806 ']' 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@953 -- # kill -0 3782806 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@954 -- # uname 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3782806 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3782806' 00:15:08.506 killing process with pid 3782806 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@968 -- # kill 3782806 00:15:08.506 11:23:33 accel -- common/autotest_common.sh@973 -- # wait 3782806 00:15:08.765 11:23:33 accel -- accel/accel.sh@76 -- # trap - ERR 00:15:08.765 11:23:33 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:15:08.765 11:23:33 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:08.765 11:23:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:08.765 11:23:33 accel -- common/autotest_common.sh@10 -- # set +x 00:15:08.765 11:23:33 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:15:08.765 11:23:33 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:15:08.765 11:23:33 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:15:08.765 11:23:33 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:08.765 11:23:33 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:08.765 11:23:33 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:08.765 11:23:33 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:08.765 11:23:33 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:08.765 11:23:33 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:15:08.765 11:23:33 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:15:08.765 11:23:33 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:08.765 11:23:33 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:15:08.765 11:23:33 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:15:08.765 11:23:33 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:15:08.765 11:23:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:08.765 11:23:33 accel -- common/autotest_common.sh@10 -- # set +x 00:15:08.765 ************************************ 00:15:08.765 START TEST accel_missing_filename 00:15:08.765 ************************************ 00:15:08.765 11:23:33 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:15:08.765 11:23:33 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:15:08.765 11:23:33 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:15:08.765 11:23:33 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:15:08.765 11:23:33 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:08.765 11:23:33 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:15:08.765 11:23:33 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:08.765 11:23:33 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:15:08.765 11:23:33 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:15:08.765 11:23:33 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:15:08.765 11:23:33 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:08.765 11:23:33 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:08.765 11:23:33 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:08.765 11:23:33 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:08.765 11:23:33 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:08.765 11:23:33 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:15:08.765 11:23:33 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:15:09.023 [2024-06-10 11:23:33.886593] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:09.024 [2024-06-10 11:23:33.886653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783116 ] 00:15:09.024 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.024 [2024-06-10 11:23:34.004563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.024 [2024-06-10 11:23:34.085979] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.282 [2024-06-10 11:23:34.130364] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:09.282 [2024-06-10 11:23:34.192818] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:15:09.282 A filename is required. 
00:15:09.282 11:23:34 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:15:09.282 11:23:34 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:09.282 11:23:34 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:15:09.282 11:23:34 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:15:09.282 11:23:34 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:15:09.282 11:23:34 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:09.282 00:15:09.282 real 0m0.409s 00:15:09.282 user 0m0.282s 00:15:09.282 sys 0m0.170s 00:15:09.282 11:23:34 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:09.282 11:23:34 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:15:09.282 ************************************ 00:15:09.282 END TEST accel_missing_filename 00:15:09.282 ************************************ 00:15:09.282 11:23:34 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:09.282 11:23:34 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:15:09.282 11:23:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:09.282 11:23:34 accel -- common/autotest_common.sh@10 -- # set +x 00:15:09.282 ************************************ 00:15:09.282 START TEST accel_compress_verify 00:15:09.282 ************************************ 00:15:09.282 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:09.282 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:15:09.282 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:09.282 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:15:09.282 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:09.282 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:15:09.282 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:09.282 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:09.282 11:23:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:09.282 11:23:34 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:15:09.282 11:23:34 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:09.282 11:23:34 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:09.282 11:23:34 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:09.282 11:23:34 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:09.282 11:23:34 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:09.282 
11:23:34 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:15:09.282 11:23:34 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:15:09.282 [2024-06-10 11:23:34.373726] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:09.282 [2024-06-10 11:23:34.373790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783139 ] 00:15:09.541 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.541 [2024-06-10 11:23:34.496151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.541 [2024-06-10 11:23:34.577435] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.541 [2024-06-10 11:23:34.621914] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:09.800 [2024-06-10 11:23:34.684447] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:15:09.800 00:15:09.800 Compression does not support the verify option, aborting. 00:15:09.800 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:15:09.800 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:09.800 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:15:09.800 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:15:09.800 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:15:09.800 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:09.800 00:15:09.800 real 0m0.418s 00:15:09.800 user 0m0.280s 00:15:09.800 sys 0m0.175s 00:15:09.800 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:09.800 11:23:34 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:15:09.801 ************************************ 00:15:09.801 END TEST accel_compress_verify 00:15:09.801 ************************************ 00:15:09.801 11:23:34 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:15:09.801 11:23:34 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:15:09.801 11:23:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:09.801 11:23:34 accel -- common/autotest_common.sh@10 -- # set +x 00:15:09.801 ************************************ 00:15:09.801 START TEST accel_wrong_workload 00:15:09.801 ************************************ 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:15:09.801 11:23:34 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:15:09.801 11:23:34 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:15:09.801 11:23:34 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:09.801 11:23:34 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:09.801 11:23:34 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:09.801 11:23:34 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:09.801 11:23:34 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:09.801 11:23:34 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:15:09.801 11:23:34 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:15:09.801 Unsupported workload type: foobar 00:15:09.801 [2024-06-10 11:23:34.874220] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:15:09.801 accel_perf options: 00:15:09.801 [-h help message] 00:15:09.801 [-q queue depth per core] 00:15:09.801 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:15:09.801 [-T number of threads per core 00:15:09.801 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:15:09.801 [-t time in seconds] 00:15:09.801 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:15:09.801 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:15:09.801 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:15:09.801 [-l for compress/decompress workloads, name of uncompressed input file 00:15:09.801 [-S for crc32c workload, use this seed value (default 0) 00:15:09.801 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:15:09.801 [-f for fill workload, use this BYTE value (default 255) 00:15:09.801 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:15:09.801 [-y verify result if this switch is on] 00:15:09.801 [-a tasks to allocate per core (default: same value as -q)] 00:15:09.801 Can be used to spread operations across a wider range of memory. 
00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:09.801 00:15:09.801 real 0m0.036s 00:15:09.801 user 0m0.023s 00:15:09.801 sys 0m0.013s 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:09.801 11:23:34 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:15:09.801 ************************************ 00:15:09.801 END TEST accel_wrong_workload 00:15:09.801 ************************************ 00:15:09.801 Error: writing output failed: Broken pipe 00:15:10.060 11:23:34 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:15:10.060 11:23:34 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:15:10.060 11:23:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:10.060 11:23:34 accel -- common/autotest_common.sh@10 -- # set +x 00:15:10.060 ************************************ 00:15:10.060 START TEST accel_negative_buffers 00:15:10.060 ************************************ 00:15:10.060 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:15:10.060 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:15:10.061 11:23:34 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:15:10.061 11:23:34 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:15:10.061 11:23:34 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:10.061 11:23:34 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:10.061 11:23:34 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:10.061 11:23:34 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:10.061 11:23:34 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:10.061 11:23:34 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:15:10.061 11:23:34 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:15:10.061 -x option must be non-negative. 
00:15:10.061 [2024-06-10 11:23:34.986944] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:15:10.061 accel_perf options: 00:15:10.061 [-h help message] 00:15:10.061 [-q queue depth per core] 00:15:10.061 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:15:10.061 [-T number of threads per core 00:15:10.061 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:15:10.061 [-t time in seconds] 00:15:10.061 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:15:10.061 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:15:10.061 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:15:10.061 [-l for compress/decompress workloads, name of uncompressed input file 00:15:10.061 [-S for crc32c workload, use this seed value (default 0) 00:15:10.061 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:15:10.061 [-f for fill workload, use this BYTE value (default 255) 00:15:10.061 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:15:10.061 [-y verify result if this switch is on] 00:15:10.061 [-a tasks to allocate per core (default: same value as -q)] 00:15:10.061 Can be used to spread operations across a wider range of memory. 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:10.061 00:15:10.061 real 0m0.035s 00:15:10.061 user 0m0.019s 00:15:10.061 sys 0m0.016s 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:10.061 11:23:34 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:15:10.061 ************************************ 00:15:10.061 END TEST accel_negative_buffers 00:15:10.061 ************************************ 00:15:10.061 Error: writing output failed: Broken pipe 00:15:10.061 11:23:35 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:15:10.061 11:23:35 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:15:10.061 11:23:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:10.061 11:23:35 accel -- common/autotest_common.sh@10 -- # set +x 00:15:10.061 ************************************ 00:15:10.061 START TEST accel_crc32c 00:15:10.061 ************************************ 00:15:10.061 11:23:35 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:15:10.061 11:23:35 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:15:10.061 [2024-06-10 11:23:35.097793] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:10.061 [2024-06-10 11:23:35.097847] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783456 ] 00:15:10.061 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.320 [2024-06-10 11:23:35.217369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.320 [2024-06-10 11:23:35.298683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:10.320 11:23:35 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:10.321 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:10.321 11:23:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:11.698 11:23:36 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:15:11.698 11:23:36 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:11.698 00:15:11.698 real 0m1.421s 00:15:11.698 user 0m1.259s 00:15:11.698 sys 0m0.177s 00:15:11.698 11:23:36 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:11.698 11:23:36 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:15:11.698 ************************************ 00:15:11.698 END TEST accel_crc32c 00:15:11.698 ************************************ 00:15:11.698 11:23:36 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:15:11.698 11:23:36 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:15:11.698 11:23:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:11.698 11:23:36 accel -- common/autotest_common.sh@10 -- # set +x 00:15:11.698 ************************************ 00:15:11.698 START TEST accel_crc32c_C2 00:15:11.698 ************************************ 00:15:11.698 11:23:36 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:15:11.698 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:15:11.698 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:15:11.698 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.698 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.698 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:15:11.698 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:15:11.699 11:23:36 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:15:11.699 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:11.699 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:11.699 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:11.699 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:11.699 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:11.699 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:15:11.699 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:15:11.699 [2024-06-10 11:23:36.593030] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:11.699 [2024-06-10 11:23:36.593102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3783742 ] 00:15:11.699 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.699 [2024-06-10 11:23:36.714382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.699 [2024-06-10 11:23:36.795265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:11.958 11:23:36 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:12.893 
11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:12.893 00:15:12.893 real 0m1.422s 00:15:12.893 user 0m1.262s 00:15:12.893 sys 0m0.173s 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:12.893 11:23:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:15:12.893 ************************************ 00:15:12.893 END TEST accel_crc32c_C2 00:15:12.893 ************************************ 00:15:13.151 11:23:38 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:15:13.151 11:23:38 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:15:13.151 11:23:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:13.151 11:23:38 accel -- common/autotest_common.sh@10 -- # set +x 00:15:13.151 ************************************ 00:15:13.151 START TEST accel_copy 00:15:13.151 ************************************ 00:15:13.151 11:23:38 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:15:13.151 11:23:38 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:13.151 11:23:38 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:13.152 11:23:38 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:13.152 11:23:38 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:15:13.152 11:23:38 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:15:13.152 [2024-06-10 11:23:38.094999] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:13.152 [2024-06-10 11:23:38.095055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784024 ] 00:15:13.152 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.152 [2024-06-10 11:23:38.214646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.410 [2024-06-10 11:23:38.296538] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.410 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:13.411 11:23:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:14.787 11:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:14.787 11:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:14.787 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:14.787 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:14.787 11:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:15:14.788 11:23:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:14.788 00:15:14.788 real 0m1.421s 00:15:14.788 user 0m1.260s 00:15:14.788 sys 0m0.174s 00:15:14.788 11:23:39 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:14.788 11:23:39 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:15:14.788 ************************************ 00:15:14.788 END TEST accel_copy 00:15:14.788 ************************************ 00:15:14.788 11:23:39 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:15:14.788 11:23:39 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:15:14.788 11:23:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:14.788 11:23:39 accel -- common/autotest_common.sh@10 -- # set +x 00:15:14.788 ************************************ 00:15:14.788 START TEST accel_fill 00:15:14.788 ************************************ 00:15:14.788 11:23:39 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:14.788 11:23:39 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:15:14.788 [2024-06-10 11:23:39.592925] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:14.788 [2024-06-10 11:23:39.592981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784303 ] 00:15:14.788 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.788 [2024-06-10 11:23:39.711489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.788 [2024-06-10 11:23:39.792454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:14.788 11:23:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:15:16.165 11:23:40 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:16.165 00:15:16.165 real 0m1.419s 00:15:16.165 user 0m1.263s 00:15:16.165 sys 0m0.170s 00:15:16.165 11:23:40 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:16.165 11:23:40 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:15:16.165 ************************************ 00:15:16.165 END TEST accel_fill 00:15:16.165 ************************************ 00:15:16.165 11:23:41 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:15:16.165 11:23:41 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:15:16.165 11:23:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:16.165 11:23:41 accel -- common/autotest_common.sh@10 -- # set +x 00:15:16.165 ************************************ 00:15:16.165 START TEST accel_copy_crc32c 00:15:16.165 ************************************ 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:15:16.165 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:15:16.165 [2024-06-10 11:23:41.098162] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:16.165 [2024-06-10 11:23:41.098221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784558 ] 00:15:16.165 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.165 [2024-06-10 11:23:41.218848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.425 [2024-06-10 11:23:41.301638] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:16.425 11:23:41 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.425 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:16.426 11:23:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:17.802 00:15:17.802 real 0m1.428s 00:15:17.802 user 0m1.261s 00:15:17.802 sys 0m0.180s 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:17.802 11:23:42 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:15:17.802 ************************************ 00:15:17.802 END TEST accel_copy_crc32c 00:15:17.802 ************************************ 00:15:17.802 11:23:42 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:15:17.802 11:23:42 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:15:17.802 11:23:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:17.802 11:23:42 accel -- common/autotest_common.sh@10 -- # set +x 00:15:17.802 ************************************ 00:15:17.802 START TEST accel_copy_crc32c_C2 00:15:17.802 ************************************ 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:15:17.802 [2024-06-10 11:23:42.602522] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:17.802 [2024-06-10 11:23:42.602607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784809 ] 00:15:17.802 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.802 [2024-06-10 11:23:42.723489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.802 [2024-06-10 11:23:42.804450] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.802 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:15:17.803 11:23:42 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:17.803 11:23:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:19.179 00:15:19.179 real 0m1.425s 00:15:19.179 user 0m1.271s 00:15:19.179 sys 0m0.169s 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:19.179 11:23:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:15:19.179 ************************************ 00:15:19.179 END TEST accel_copy_crc32c_C2 00:15:19.179 ************************************ 00:15:19.179 11:23:44 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:15:19.179 11:23:44 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:15:19.179 11:23:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:19.179 11:23:44 accel -- common/autotest_common.sh@10 -- # set +x 00:15:19.179 ************************************ 00:15:19.179 START TEST accel_dualcast 00:15:19.179 ************************************ 00:15:19.179 11:23:44 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:15:19.179 11:23:44 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:15:19.179 [2024-06-10 11:23:44.109704] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:15:19.180 [2024-06-10 11:23:44.109760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785071 ] 00:15:19.180 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.180 [2024-06-10 11:23:44.229917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.438 [2024-06-10 11:23:44.312846] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 
11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.438 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.439 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:19.439 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.439 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.439 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:19.439 11:23:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:19.439 11:23:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:19.439 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:19.439 11:23:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:15:20.815 11:23:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:20.815 00:15:20.815 real 0m1.425s 00:15:20.815 user 0m1.257s 00:15:20.815 sys 0m0.180s 00:15:20.815 11:23:45 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:20.815 11:23:45 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:15:20.815 ************************************ 00:15:20.815 END TEST accel_dualcast 00:15:20.815 ************************************ 00:15:20.815 11:23:45 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:15:20.815 11:23:45 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:15:20.815 11:23:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:20.815 11:23:45 accel -- common/autotest_common.sh@10 -- # set +x 00:15:20.815 ************************************ 00:15:20.815 START TEST accel_compare 00:15:20.815 ************************************ 00:15:20.815 11:23:45 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:15:20.815 [2024-06-10 11:23:45.615778] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
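Every case in this stretch of the log follows the same wrapper pattern: run_test prints the asterisk START banner, times the accel_test helper (the 'real / user / sys' lines such as 'real 0m1.425s' above come from that timing), and prints the matching END banner, as with accel_dualcast just finished and accel_compare starting here. The snippet below only mimics that observable pattern; run_test_sketch is a made-up name and this is not SPDK's actual run_test from autotest_common.sh.

# Illustrative stand-in for the banner-and-timing wrapper visible in the trace
# (not the real run_test implementation).
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                      # emits the real/user/sys lines seen in the log
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}
# accel_test is the suite's own helper (defined in accel.sh); shown here only to mirror the traced call.
run_test_sketch accel_compare accel_test -t 1 -w compare -y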
00:15:20.815 [2024-06-10 11:23:45.615836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785334 ] 00:15:20.815 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.815 [2024-06-10 11:23:45.735827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.815 [2024-06-10 11:23:45.817485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.815 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:20.816 11:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:22.193 11:23:47 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:15:22.193 11:23:47 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:22.193 00:15:22.193 real 0m1.424s 00:15:22.193 user 0m1.258s 00:15:22.193 sys 0m0.179s 00:15:22.193 11:23:47 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:22.193 11:23:47 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:15:22.193 ************************************ 00:15:22.193 END TEST accel_compare 00:15:22.193 ************************************ 00:15:22.193 11:23:47 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:15:22.193 11:23:47 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:15:22.193 11:23:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:22.193 11:23:47 accel -- common/autotest_common.sh@10 -- # set +x 00:15:22.193 ************************************ 00:15:22.193 START TEST accel_xor 00:15:22.193 ************************************ 00:15:22.193 11:23:47 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:15:22.193 11:23:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:15:22.193 [2024-06-10 11:23:47.121461] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
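The xor workload is exercised twice in this section: once here with the default two source buffers (the 'val=2' entry in the trace) and once further down with '-x 3' to xor across three sources. Both accel_perf command lines appear verbatim in the trace; a standalone sketch under the same assumptions as the dualcast example (illustrative SPDK_DIR, harness JSON config omitted):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y        # default: two source buffers
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3   # -x 3: xor across three source buffers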
00:15:22.193 [2024-06-10 11:23:47.121539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785599 ] 00:15:22.193 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.193 [2024-06-10 11:23:47.242528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.452 [2024-06-10 11:23:47.324378] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:15:22.452 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:22.453 11:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 
11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:23.830 00:15:23.830 real 0m1.426s 00:15:23.830 user 0m1.259s 00:15:23.830 sys 0m0.181s 00:15:23.830 11:23:48 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:23.830 11:23:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:15:23.830 ************************************ 00:15:23.830 END TEST accel_xor 00:15:23.830 ************************************ 00:15:23.830 11:23:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:15:23.830 11:23:48 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:15:23.830 11:23:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:23.830 11:23:48 accel -- common/autotest_common.sh@10 -- # set +x 00:15:23.830 ************************************ 00:15:23.830 START TEST accel_xor 00:15:23.830 ************************************ 00:15:23.830 11:23:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:15:23.830 [2024-06-10 11:23:48.627152] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:15:23.830 [2024-06-10 11:23:48.627228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785848 ] 00:15:23.830 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.830 [2024-06-10 11:23:48.750169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.830 [2024-06-10 11:23:48.833161] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.830 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:23.831 11:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:25.208 
11:23:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:15:25.208 11:23:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:25.208 00:15:25.208 real 0m1.429s 00:15:25.208 user 0m1.267s 00:15:25.208 sys 0m0.176s 00:15:25.208 11:23:50 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:25.208 11:23:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:15:25.208 ************************************ 00:15:25.208 END TEST accel_xor 00:15:25.208 ************************************ 00:15:25.208 11:23:50 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:15:25.208 11:23:50 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:15:25.208 11:23:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:25.208 11:23:50 accel -- common/autotest_common.sh@10 -- # set +x 00:15:25.208 ************************************ 00:15:25.208 START TEST accel_dif_verify 00:15:25.208 ************************************ 00:15:25.208 11:23:50 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:15:25.208 11:23:50 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:15:25.208 [2024-06-10 11:23:50.140117] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
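The DIF cases that begin here (dif_verify, then dif_generate and dif_generate_copy below) configure a 4096-byte buffer plus what appears to be a 512-byte block size with 8 bytes of protection information per block, which reads like the usual T10 DIF layout, though the trace itself only shows the raw byte values. The accel_perf invocation is again taken verbatim from the trace; a standalone sketch under the earlier assumptions:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_verify    # DIF verification workload, 1-second run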
00:15:25.208 [2024-06-10 11:23:50.140172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786124 ] 00:15:25.208 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.208 [2024-06-10 11:23:50.247681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.467 [2024-06-10 11:23:50.330016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:15:25.467 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 
11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:25.468 11:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:26.845 
11:23:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:15:26.845 11:23:51 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:26.845 00:15:26.845 real 0m1.412s 00:15:26.845 user 0m1.255s 00:15:26.845 sys 0m0.172s 00:15:26.845 11:23:51 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:26.845 11:23:51 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:15:26.845 ************************************ 00:15:26.845 END TEST accel_dif_verify 00:15:26.845 ************************************ 00:15:26.845 11:23:51 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:15:26.845 11:23:51 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:15:26.845 11:23:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:26.845 11:23:51 accel -- common/autotest_common.sh@10 -- # set +x 00:15:26.845 ************************************ 00:15:26.845 START TEST accel_dif_generate 00:15:26.845 ************************************ 00:15:26.845 11:23:51 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 
11:23:51 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:15:26.845 [2024-06-10 11:23:51.638182] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:26.845 [2024-06-10 11:23:51.638243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786376 ] 00:15:26.845 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.845 [2024-06-10 11:23:51.759049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.845 [2024-06-10 11:23:51.841954] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:26.845 11:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:15:28.222 11:23:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:15:28.223 11:23:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:28.223 11:23:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:15:28.223 11:23:53 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:28.223 00:15:28.223 real 0m1.424s 00:15:28.223 user 0m1.262s 00:15:28.223 sys 
0m0.176s 00:15:28.223 11:23:53 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:28.223 11:23:53 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:15:28.223 ************************************ 00:15:28.223 END TEST accel_dif_generate 00:15:28.223 ************************************ 00:15:28.223 11:23:53 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:15:28.223 11:23:53 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:15:28.223 11:23:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:28.223 11:23:53 accel -- common/autotest_common.sh@10 -- # set +x 00:15:28.223 ************************************ 00:15:28.223 START TEST accel_dif_generate_copy 00:15:28.223 ************************************ 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:15:28.223 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:15:28.223 [2024-06-10 11:23:53.144206] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
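Note on the xtrace above: run_test drives each accel case by having accel_test launch accel_perf from the SPDK build tree with the workload under test (-w dif_generate_copy) for one second (-t 1), while build_accel_config assembles what appears to be an optional accel JSON config and feeds it over /dev/fd/62. A minimal hand-run equivalent, offered only as a sketch: it assumes a locally built SPDK tree at ./spdk and drops the fd-based config, so the software module is selected, which matches the val=software setting logged below.

  ./spdk/build/examples/accel_perf -t 1 -w dif_generate_copy

Only flags already visible in the logged command line are reused here; the ./spdk path and the omitted -c config are assumptions for local reproduction.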
00:15:28.223 [2024-06-10 11:23:53.144268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786644 ] 00:15:28.223 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.223 [2024-06-10 11:23:53.266327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.482 [2024-06-10 11:23:53.349179] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:28.482 11:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
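The long run of 'IFS=:', 'read -r var val' and 'case "$var"' lines above is accel.sh recording the expected settings for this case (opcode dif_generate_copy, software module, 4096-byte buffers, one second) so the end-of-test checks can assert them. A rough reconstruction of that pattern, paraphrased from the xtrace rather than copied from the real script, which differs in detail; the input format here is hypothetical:

  # Paraphrased sketch of the expectation loop (hypothetical "var:val" input):
  while IFS=: read -r var val; do
      case "$var" in
          opc) accel_opc=$val ;;        # workload, e.g. dif_generate_copy
          module) accel_module=$val ;;  # backend, e.g. software
      esac
  done
  # Mirrors the [[ -n software ]] / [[ -n dif_generate_copy ]] checks seen at
  # the end of each test:
  [[ -n $accel_module ]] && [[ -n $accel_opc ]]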
00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:29.860 00:15:29.860 real 0m1.428s 00:15:29.860 user 0m1.265s 00:15:29.860 sys 0m0.176s 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:29.860 11:23:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:15:29.860 ************************************ 00:15:29.860 END TEST accel_dif_generate_copy 00:15:29.860 ************************************ 00:15:29.860 11:23:54 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:15:29.860 11:23:54 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:29.860 11:23:54 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:15:29.860 11:23:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:29.860 11:23:54 accel -- common/autotest_common.sh@10 -- # set +x 00:15:29.860 ************************************ 00:15:29.860 START TEST accel_comp 00:15:29.860 ************************************ 00:15:29.860 11:23:54 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:15:29.860 11:23:54 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:15:29.860 [2024-06-10 11:23:54.630663] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:29.860 [2024-06-10 11:23:54.630705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786916 ] 00:15:29.860 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.861 [2024-06-10 11:23:54.737675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.861 [2024-06-10 11:23:54.821681] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 
11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:29.861 11:23:54 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:29.861 11:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:15:31.236 11:23:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:31.236 00:15:31.236 real 0m1.399s 00:15:31.236 user 0m1.251s 00:15:31.236 sys 0m0.162s 00:15:31.236 11:23:56 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:31.236 11:23:56 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:15:31.236 ************************************ 00:15:31.236 END TEST accel_comp 00:15:31.236 ************************************ 00:15:31.236 11:23:56 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:31.236 11:23:56 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:15:31.236 11:23:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:31.236 11:23:56 accel -- common/autotest_common.sh@10 -- # set +x 00:15:31.236 ************************************ 00:15:31.236 START TEST accel_decomp 00:15:31.236 ************************************ 00:15:31.236 11:23:56 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:15:31.236 11:23:56 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:15:31.236 [2024-06-10 11:23:56.113224] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:31.236 [2024-06-10 11:23:56.113278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787200 ] 00:15:31.236 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.236 [2024-06-10 11:23:56.233979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.236 [2024-06-10 11:23:56.315604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:31.494 11:23:56 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:15:31.494 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.495 11:23:56 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:31.495 11:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:32.432 11:23:57 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:32.432 00:15:32.432 real 0m1.427s 00:15:32.432 user 0m1.268s 00:15:32.432 sys 0m0.172s 00:15:32.432 11:23:57 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:32.432 11:23:57 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:15:32.432 ************************************ 00:15:32.432 END TEST accel_decomp 00:15:32.432 ************************************ 00:15:32.691 
11:23:57 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:15:32.691 11:23:57 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:15:32.691 11:23:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:32.691 11:23:57 accel -- common/autotest_common.sh@10 -- # set +x 00:15:32.691 ************************************ 00:15:32.691 START TEST accel_decomp_full 00:15:32.691 ************************************ 00:15:32.691 11:23:57 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:15:32.691 11:23:57 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:15:32.691 [2024-06-10 11:23:57.626797] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
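This case points accel_perf at the bib sample file from the SPDK tree instead of the default 4096-byte buffers: -l names the input file, -y asks for the result to be verified, and -o 0 appears to let the file size drive the transfer size, which would explain the '111250 bytes' value logged just below. A hand-run sketch of the same invocation, assuming a locally built SPDK tree at ./spdk and reusing only flags visible in the log:

  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -o 0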
00:15:32.691 [2024-06-10 11:23:57.626858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787479 ] 00:15:32.691 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.691 [2024-06-10 11:23:57.750108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.950 [2024-06-10 11:23:57.834768] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:15:32.950 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:32.951 11:23:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:34.327 11:23:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:34.327 00:15:34.327 real 0m1.444s 00:15:34.327 user 0m1.279s 00:15:34.327 sys 0m0.178s 00:15:34.327 11:23:59 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:34.327 11:23:59 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:15:34.327 ************************************ 00:15:34.327 END TEST accel_decomp_full 00:15:34.327 ************************************ 00:15:34.327 11:23:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:15:34.327 11:23:59 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:15:34.327 11:23:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:34.327 11:23:59 accel -- common/autotest_common.sh@10 -- # set +x 00:15:34.327 ************************************ 00:15:34.327 START TEST accel_decomp_mcore 00:15:34.327 ************************************ 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:15:34.327 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:15:34.327 [2024-06-10 11:23:59.132876] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:34.327 [2024-06-10 11:23:59.132948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787774 ] 00:15:34.327 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.327 [2024-06-10 11:23:59.255278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.327 [2024-06-10 11:23:59.340428] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.328 [2024-06-10 11:23:59.340522] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.328 [2024-06-10 11:23:59.340641] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.328 [2024-06-10 11:23:59.340642] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:34.328 11:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
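The decomp_mcore case adds -m 0xf, handing accel_perf a four-core mask; that matches the four 'Reactor started on core' notices above and the roughly fourfold user-versus-real CPU time in the summary that follows. A hand-run sketch, again assuming a local SPDK build at ./spdk and reusing only the flags shown in the run_test line:

  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -m 0xf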
00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:35.706 00:15:35.706 real 0m1.439s 00:15:35.706 user 0m4.616s 00:15:35.706 sys 0m0.180s 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:35.706 11:24:00 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:15:35.706 ************************************ 00:15:35.706 END TEST accel_decomp_mcore 00:15:35.706 ************************************ 00:15:35.706 11:24:00 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:35.706 11:24:00 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:15:35.706 11:24:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:35.706 11:24:00 accel -- common/autotest_common.sh@10 -- # set +x 00:15:35.706 ************************************ 00:15:35.706 START TEST accel_decomp_full_mcore 00:15:35.706 ************************************ 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:15:35.706 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:15:35.706 [2024-06-10 11:24:00.620055] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:35.706 [2024-06-10 11:24:00.620095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3788098 ] 00:15:35.706 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.706 [2024-06-10 11:24:00.724430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.706 [2024-06-10 11:24:00.809293] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.706 [2024-06-10 11:24:00.809388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.706 [2024-06-10 11:24:00.809496] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.706 [2024-06-10 11:24:00.809498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.965 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.966 11:24:00 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:35.966 11:24:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:37.342 11:24:02 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:15:37.342 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:37.343 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:37.343 11:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:37.343 00:15:37.343 real 0m1.420s 00:15:37.343 user 0m4.656s 00:15:37.343 sys 0m0.161s 00:15:37.343 11:24:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:37.343 11:24:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:15:37.343 ************************************ 00:15:37.343 END TEST accel_decomp_full_mcore 00:15:37.343 ************************************ 00:15:37.343 11:24:02 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:15:37.343 11:24:02 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:15:37.343 11:24:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:37.343 11:24:02 accel -- common/autotest_common.sh@10 -- # set +x 00:15:37.343 ************************************ 00:15:37.343 START TEST accel_decomp_mthread 00:15:37.343 ************************************ 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
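The accel_decomp_mthread case set up just above drives the same accel_perf example binary as the preceding multi-core runs, but swaps the core mask for -T 2 so two worker threads share one core. A minimal stand-alone sketch of that invocation follows; the binary path, input file and flags are copied from the trace, while the empty JSON fed to fd 62 merely stands in for whatever build_accel_config would normally generate (an assumption), and the meaning given to -y is likewise assumed rather than stated anywhere in this log.

    # Hedged sketch of the accel_perf run traced above (paths shortened via $SPDK).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -t 1: run for one second; -w decompress: workload under test;
    # -l: compressed input file; -y: verify the output (assumed); -T 2: two worker threads.
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 \
        -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2 62<<< '{}'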
00:15:37.343 [2024-06-10 11:24:02.131747] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:37.343 [2024-06-10 11:24:02.131807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3788472 ] 00:15:37.343 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.343 [2024-06-10 11:24:02.249237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.343 [2024-06-10 11:24:02.329897] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:37.343 11:24:02 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:38.807 00:15:38.807 real 0m1.424s 00:15:38.807 user 0m1.268s 00:15:38.807 sys 0m0.169s 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:38.807 11:24:03 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:15:38.807 ************************************ 00:15:38.807 END TEST accel_decomp_mthread 00:15:38.807 ************************************ 00:15:38.807 11:24:03 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:15:38.807 11:24:03 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:15:38.807 11:24:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:38.807 11:24:03 
accel -- common/autotest_common.sh@10 -- # set +x 00:15:38.807 ************************************ 00:15:38.807 START TEST accel_decomp_full_mthread 00:15:38.807 ************************************ 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:15:38.807 [2024-06-10 11:24:03.629969] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
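The accel_decomp_full_mthread case starting here is the same decompress workload again; the only difference visible in the trace is that the command line gains -o 0 and the run operates on 111250-byte buffers instead of the 4096-byte buffers of the plain mthread case. The two invocations are collected below for comparison, exactly as the trace records them apart from the $SPDK abbreviation; in the harness both run with a JSON config already open on fd 62, and reading -o 0 as "take the transfer size from the input file" is an interpretation, not something the log states.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # accel_decomp_mthread: 4096-byte transfers, two worker threads
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -T 2
    # accel_decomp_full_mthread: adds -o 0; the trace reports 111250-byte transfers
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -T 2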
00:15:38.807 [2024-06-10 11:24:03.630025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3788753 ] 00:15:38.807 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.807 [2024-06-10 11:24:03.749679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.807 [2024-06-10 11:24:03.831388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.807 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:38.808 11:24:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:40.186 00:15:40.186 real 0m1.448s 00:15:40.186 user 0m1.296s 00:15:40.186 sys 0m0.164s 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:40.186 11:24:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:15:40.186 ************************************ 00:15:40.186 END TEST accel_decomp_full_mthread 00:15:40.186 
************************************ 00:15:40.186 11:24:05 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:15:40.186 11:24:05 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:15:40.186 11:24:05 accel -- accel/accel.sh@137 -- # build_accel_config 00:15:40.186 11:24:05 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:15:40.186 11:24:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:40.186 11:24:05 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:15:40.186 11:24:05 accel -- common/autotest_common.sh@10 -- # set +x 00:15:40.186 11:24:05 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:15:40.186 11:24:05 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:15:40.186 11:24:05 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:15:40.186 11:24:05 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:15:40.186 11:24:05 accel -- accel/accel.sh@40 -- # local IFS=, 00:15:40.186 11:24:05 accel -- accel/accel.sh@41 -- # jq -r . 00:15:40.186 ************************************ 00:15:40.186 START TEST accel_dif_functional_tests 00:15:40.186 ************************************ 00:15:40.186 11:24:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:15:40.186 [2024-06-10 11:24:05.184598] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:40.186 [2024-06-10 11:24:05.184653] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789352 ] 00:15:40.186 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.445 [2024-06-10 11:24:05.302591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.445 [2024-06-10 11:24:05.391546] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.445 [2024-06-10 11:24:05.391661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.445 [2024-06-10 11:24:05.391667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.445 00:15:40.445 00:15:40.445 CUnit - A unit testing framework for C - Version 2.1-3 00:15:40.445 http://cunit.sourceforge.net/ 00:15:40.445 00:15:40.445 00:15:40.445 Suite: accel_dif 00:15:40.445 Test: verify: DIF generated, GUARD check ...passed 00:15:40.445 Test: verify: DIF generated, APPTAG check ...passed 00:15:40.445 Test: verify: DIF generated, REFTAG check ...passed 00:15:40.445 Test: verify: DIF not generated, GUARD check ...[2024-06-10 11:24:05.465163] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:15:40.445 passed 00:15:40.445 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 11:24:05.465232] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:15:40.445 passed 00:15:40.445 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 11:24:05.465262] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:15:40.445 passed 00:15:40.445 Test: verify: APPTAG correct, APPTAG check ...passed 00:15:40.445 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 11:24:05.465324] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:15:40.445 passed 00:15:40.445 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:15:40.445 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:15:40.445 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:15:40.445 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 11:24:05.465463] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:15:40.445 passed 00:15:40.445 Test: verify copy: DIF generated, GUARD check ...passed 00:15:40.445 Test: verify copy: DIF generated, APPTAG check ...passed 00:15:40.445 Test: verify copy: DIF generated, REFTAG check ...passed 00:15:40.445 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 11:24:05.465617] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:15:40.445 passed 00:15:40.445 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 11:24:05.465650] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:15:40.445 passed 00:15:40.445 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 11:24:05.465680] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:15:40.445 passed 00:15:40.445 Test: generate copy: DIF generated, GUARD check ...passed 00:15:40.445 Test: generate copy: DIF generated, APTTAG check ...passed 00:15:40.445 Test: generate copy: DIF generated, REFTAG check ...passed 00:15:40.445 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:15:40.445 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:15:40.445 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:15:40.445 Test: generate copy: iovecs-len validate ...[2024-06-10 11:24:05.465910] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:15:40.445 passed 00:15:40.445 Test: generate copy: buffer alignment validate ...passed 00:15:40.445 00:15:40.445 Run Summary: Type Total Ran Passed Failed Inactive 00:15:40.445 suites 1 1 n/a 0 0 00:15:40.445 tests 26 26 26 0 0 00:15:40.445 asserts 115 115 115 0 n/a 00:15:40.445 00:15:40.445 Elapsed time = 0.002 seconds 00:15:40.704 00:15:40.704 real 0m0.510s 00:15:40.704 user 0m0.676s 00:15:40.704 sys 0m0.206s 00:15:40.704 11:24:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:40.704 11:24:05 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:15:40.704 ************************************ 00:15:40.704 END TEST accel_dif_functional_tests 00:15:40.704 ************************************ 00:15:40.704 00:15:40.704 real 0m33.470s 00:15:40.704 user 0m35.686s 00:15:40.704 sys 0m6.070s 00:15:40.704 11:24:05 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:40.704 11:24:05 accel -- common/autotest_common.sh@10 -- # set +x 00:15:40.704 ************************************ 00:15:40.704 END TEST accel 00:15:40.704 ************************************ 00:15:40.704 11:24:05 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:15:40.704 11:24:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:40.704 11:24:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:40.704 11:24:05 -- common/autotest_common.sh@10 -- # set +x 00:15:40.704 ************************************ 00:15:40.704 START TEST accel_rpc 00:15:40.705 ************************************ 00:15:40.705 11:24:05 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:15:40.964 * Looking for test storage... 00:15:40.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:15:40.964 11:24:05 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:40.964 11:24:05 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:40.964 11:24:05 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3789673 00:15:40.964 11:24:05 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3789673 00:15:40.964 11:24:05 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 3789673 ']' 00:15:40.964 11:24:05 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.964 11:24:05 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:40.964 11:24:05 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.964 11:24:05 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:40.964 11:24:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.964 [2024-06-10 11:24:05.924463] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
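The accel_rpc suite that starts above launches a bare spdk_tgt with --wait-for-rpc and then exercises the accel layer purely over JSON-RPC. Condensed, the opcode-assignment sequence it runs next looks like the sketch below; every RPC named here appears verbatim in the trace, the small rpc wrapper and the comments are only for readability, and the note that assignments precede framework_start_init is inferred from the test's structure rather than stated in the log.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }          # convenience wrapper (not in the trace)

    rpc accel_assign_opc -o copy -m incorrect       # recorded: "Operation copy will be assigned to module incorrect"
    rpc accel_assign_opc -o copy -m software        # overrides the bogus assignment
    rpc framework_start_init                        # initialize subsystems after the assignments are made
    rpc accel_get_opc_assignments | jq -r .copy     # should now print: software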
00:15:40.964 [2024-06-10 11:24:05.924536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3789673 ] 00:15:40.964 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.964 [2024-06-10 11:24:06.044837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.222 [2024-06-10 11:24:06.129978] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.790 11:24:06 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:41.790 11:24:06 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:15:41.790 11:24:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:15:41.790 11:24:06 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:15:41.790 11:24:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:15:41.790 11:24:06 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:15:41.790 11:24:06 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:15:41.790 11:24:06 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:41.790 11:24:06 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:41.790 11:24:06 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.790 ************************************ 00:15:41.790 START TEST accel_assign_opcode 00:15:41.790 ************************************ 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:15:41.790 [2024-06-10 11:24:06.860245] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:15:41.790 [2024-06-10 11:24:06.868252] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.790 11:24:06 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:15:42.049 11:24:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:42.049 11:24:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:15:42.049 11:24:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:42.049 11:24:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:15:42.049 11:24:07 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:15:42.049 11:24:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:15:42.049 11:24:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:42.049 software 00:15:42.049 00:15:42.049 real 0m0.251s 00:15:42.049 user 0m0.050s 00:15:42.049 sys 0m0.011s 00:15:42.049 11:24:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:42.049 11:24:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:15:42.049 ************************************ 00:15:42.049 END TEST accel_assign_opcode 00:15:42.049 ************************************ 00:15:42.049 11:24:07 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3789673 00:15:42.049 11:24:07 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 3789673 ']' 00:15:42.049 11:24:07 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 3789673 00:15:42.049 11:24:07 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:15:42.308 11:24:07 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:42.308 11:24:07 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3789673 00:15:42.308 11:24:07 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:42.308 11:24:07 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:42.308 11:24:07 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3789673' 00:15:42.308 killing process with pid 3789673 00:15:42.308 11:24:07 accel_rpc -- common/autotest_common.sh@968 -- # kill 3789673 00:15:42.308 11:24:07 accel_rpc -- common/autotest_common.sh@973 -- # wait 3789673 00:15:42.567 00:15:42.567 real 0m1.768s 00:15:42.567 user 0m1.858s 00:15:42.567 sys 0m0.539s 00:15:42.567 11:24:07 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:42.567 11:24:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.567 ************************************ 00:15:42.567 END TEST accel_rpc 00:15:42.567 ************************************ 00:15:42.567 11:24:07 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:15:42.567 11:24:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:42.567 11:24:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:42.567 11:24:07 -- common/autotest_common.sh@10 -- # set +x 00:15:42.567 ************************************ 00:15:42.567 START TEST app_cmdline 00:15:42.567 ************************************ 00:15:42.567 11:24:07 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:15:42.827 * Looking for test storage... 
00:15:42.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:42.827 11:24:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:15:42.827 11:24:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3790130 00:15:42.827 11:24:07 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:15:42.827 11:24:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3790130 00:15:42.827 11:24:07 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 3790130 ']' 00:15:42.827 11:24:07 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.827 11:24:07 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:42.827 11:24:07 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.827 11:24:07 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:42.827 11:24:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:15:42.827 [2024-06-10 11:24:07.782817] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:15:42.827 [2024-06-10 11:24:07.782888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3790130 ] 00:15:42.827 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.827 [2024-06-10 11:24:07.901771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.085 [2024-06-10 11:24:07.986758] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.652 11:24:08 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:43.652 11:24:08 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:15:43.652 11:24:08 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:15:43.910 { 00:15:43.910 "version": "SPDK v24.09-pre git sha1 1e8a0c991", 00:15:43.910 "fields": { 00:15:43.910 "major": 24, 00:15:43.910 "minor": 9, 00:15:43.910 "patch": 0, 00:15:43.910 "suffix": "-pre", 00:15:43.910 "commit": "1e8a0c991" 00:15:43.910 } 00:15:43.910 } 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:15:43.910 11:24:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:43.910 11:24:08 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:44.169 request: 00:15:44.169 { 00:15:44.169 "method": "env_dpdk_get_mem_stats", 00:15:44.169 "req_id": 1 00:15:44.169 } 00:15:44.169 Got JSON-RPC error response 00:15:44.169 response: 00:15:44.169 { 00:15:44.169 "code": -32601, 00:15:44.169 "message": "Method not found" 00:15:44.169 } 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:44.169 11:24:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3790130 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 3790130 ']' 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 3790130 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3790130 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3790130' 00:15:44.169 killing process with pid 3790130 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@968 -- # kill 3790130 00:15:44.169 11:24:09 app_cmdline -- common/autotest_common.sh@973 -- # wait 3790130 00:15:44.736 00:15:44.736 real 0m1.962s 00:15:44.736 user 0m2.383s 00:15:44.736 sys 0m0.576s 00:15:44.736 11:24:09 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:44.736 11:24:09 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:15:44.736 ************************************ 00:15:44.736 END TEST app_cmdline 00:15:44.736 ************************************ 00:15:44.736 11:24:09 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:15:44.736 11:24:09 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:44.736 11:24:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:44.736 11:24:09 -- common/autotest_common.sh@10 -- # set +x 00:15:44.736 ************************************ 00:15:44.736 START TEST version 00:15:44.736 ************************************ 00:15:44.736 11:24:09 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:15:44.736 * Looking for test storage... 00:15:44.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:44.736 11:24:09 version -- app/version.sh@17 -- # get_header_version major 00:15:44.736 11:24:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:15:44.736 11:24:09 version -- app/version.sh@14 -- # cut -f2 00:15:44.736 11:24:09 version -- app/version.sh@14 -- # tr -d '"' 00:15:44.736 11:24:09 version -- app/version.sh@17 -- # major=24 00:15:44.736 11:24:09 version -- app/version.sh@18 -- # get_header_version minor 00:15:44.736 11:24:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:15:44.736 11:24:09 version -- app/version.sh@14 -- # cut -f2 00:15:44.736 11:24:09 version -- app/version.sh@14 -- # tr -d '"' 00:15:44.736 11:24:09 version -- app/version.sh@18 -- # minor=9 00:15:44.736 11:24:09 version -- app/version.sh@19 -- # get_header_version patch 00:15:44.736 11:24:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:15:44.736 11:24:09 version -- app/version.sh@14 -- # cut -f2 00:15:44.736 11:24:09 version -- app/version.sh@14 -- # tr -d '"' 00:15:44.736 11:24:09 version -- app/version.sh@19 -- # patch=0 00:15:44.736 11:24:09 version -- app/version.sh@20 -- # get_header_version suffix 00:15:44.736 11:24:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:15:44.736 11:24:09 version -- app/version.sh@14 -- # cut -f2 00:15:44.736 11:24:09 version -- app/version.sh@14 -- # tr -d '"' 00:15:44.736 11:24:09 version -- app/version.sh@20 -- # suffix=-pre 00:15:44.736 11:24:09 version -- app/version.sh@22 -- # version=24.9 00:15:44.736 11:24:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:15:44.736 11:24:09 version -- app/version.sh@28 -- # version=24.9rc0 00:15:44.736 11:24:09 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:44.736 11:24:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:15:44.995 11:24:09 version -- app/version.sh@30 -- # py_version=24.9rc0 
00:15:44.995 11:24:09 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:15:44.995 00:15:44.995 real 0m0.185s 00:15:44.995 user 0m0.093s 00:15:44.995 sys 0m0.139s 00:15:44.995 11:24:09 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:44.995 11:24:09 version -- common/autotest_common.sh@10 -- # set +x 00:15:44.995 ************************************ 00:15:44.995 END TEST version 00:15:44.995 ************************************ 00:15:44.995 11:24:09 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:15:44.995 11:24:09 -- spdk/autotest.sh@198 -- # uname -s 00:15:44.995 11:24:09 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:15:44.995 11:24:09 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:15:44.995 11:24:09 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:15:44.995 11:24:09 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:15:44.995 11:24:09 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:44.995 11:24:09 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:44.995 11:24:09 -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:44.995 11:24:09 -- common/autotest_common.sh@10 -- # set +x 00:15:44.995 11:24:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:44.995 11:24:09 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:15:44.995 11:24:09 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:15:44.995 11:24:09 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:15:44.995 11:24:09 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:15:44.995 11:24:09 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:15:44.995 11:24:09 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:15:44.995 11:24:09 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:44.995 11:24:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:44.995 11:24:09 -- common/autotest_common.sh@10 -- # set +x 00:15:44.995 ************************************ 00:15:44.995 START TEST nvmf_tcp 00:15:44.995 ************************************ 00:15:44.995 11:24:09 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:15:44.995 * Looking for test storage... 00:15:44.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.995 11:24:10 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.254 11:24:10 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.254 11:24:10 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.254 11:24:10 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.254 11:24:10 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.254 11:24:10 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.254 11:24:10 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.254 11:24:10 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:15:45.254 11:24:10 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:15:45.254 11:24:10 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:45.254 11:24:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:15:45.254 11:24:10 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:45.254 11:24:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:45.254 11:24:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:45.254 11:24:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.254 ************************************ 00:15:45.254 START TEST nvmf_example 00:15:45.254 ************************************ 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:45.254 * Looking for test storage... 
00:15:45.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.254 11:24:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:55.236 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:55.236 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:55.236 Found net devices under 
0000:af:00.0: cvl_0_0 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:55.236 Found net devices under 0000:af:00.1: cvl_0_1 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.236 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:55.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:15:55.237 00:15:55.237 --- 10.0.0.2 ping statistics --- 00:15:55.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.237 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:15:55.237 00:15:55.237 --- 10.0.0.1 ping statistics --- 00:15:55.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.237 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3794673 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3794673 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 3794673 ']' 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:55.237 11:24:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:55.237 11:24:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:55.237 EAL: No free 2048 kB hugepages reported on node 1 
00:16:05.274 Initializing NVMe Controllers 00:16:05.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:05.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:05.274 Initialization complete. Launching workers. 00:16:05.274 ======================================================== 00:16:05.274 Latency(us) 00:16:05.274 Device Information : IOPS MiB/s Average min max 00:16:05.274 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15686.60 61.28 4080.99 903.94 20015.19 00:16:05.274 ======================================================== 00:16:05.274 Total : 15686.60 61.28 4080.99 903.94 20015.19 00:16:05.274 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.274 rmmod nvme_tcp 00:16:05.274 rmmod nvme_fabrics 00:16:05.274 rmmod nvme_keyring 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3794673 ']' 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3794673 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 3794673 ']' 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 3794673 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3794673 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3794673' 00:16:05.274 killing process with pid 3794673 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 3794673 00:16:05.274 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 3794673 00:16:05.533 nvmf threads initialize successfully 00:16:05.533 bdev subsystem init successfully 00:16:05.533 created a nvmf target service 00:16:05.533 create targets's poll groups done 00:16:05.533 all subsystems of target started 00:16:05.533 nvmf target is running 00:16:05.533 all subsystems of target stopped 00:16:05.533 destroy targets's poll groups done 00:16:05.533 destroyed the nvmf target service 00:16:05.533 bdev subsystem finish successfully 00:16:05.533 nvmf threads destroy successfully 00:16:05.533 11:24:30 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.533 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.533 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.533 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.533 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.533 11:24:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.533 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.533 11:24:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.440 11:24:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.700 11:24:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:16:07.700 11:24:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:07.700 11:24:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:07.700 00:16:07.700 real 0m22.443s 00:16:07.700 user 0m46.326s 00:16:07.700 sys 0m8.482s 00:16:07.700 11:24:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:07.700 11:24:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:07.700 ************************************ 00:16:07.700 END TEST nvmf_example 00:16:07.700 ************************************ 00:16:07.700 11:24:32 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:07.700 11:24:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:07.700 11:24:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:07.700 11:24:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.700 ************************************ 00:16:07.700 START TEST nvmf_filesystem 00:16:07.700 ************************************ 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:07.700 * Looking for test storage... 
00:16:07.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:16:07.700 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:16:07.701 11:24:32 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:16:07.701 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:16:07.962 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:07.962 #define SPDK_CONFIG_H 00:16:07.962 #define SPDK_CONFIG_APPS 1 00:16:07.962 #define SPDK_CONFIG_ARCH native 00:16:07.962 #undef SPDK_CONFIG_ASAN 00:16:07.962 #undef SPDK_CONFIG_AVAHI 00:16:07.962 #undef SPDK_CONFIG_CET 00:16:07.962 #define SPDK_CONFIG_COVERAGE 1 00:16:07.962 #define SPDK_CONFIG_CROSS_PREFIX 00:16:07.962 #undef SPDK_CONFIG_CRYPTO 00:16:07.962 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:07.962 #undef SPDK_CONFIG_CUSTOMOCF 00:16:07.962 #undef SPDK_CONFIG_DAOS 00:16:07.962 #define SPDK_CONFIG_DAOS_DIR 00:16:07.962 #define SPDK_CONFIG_DEBUG 1 00:16:07.962 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:07.962 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:16:07.962 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:07.962 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:07.962 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:07.962 #undef SPDK_CONFIG_DPDK_UADK 00:16:07.962 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:16:07.962 #define SPDK_CONFIG_EXAMPLES 1 00:16:07.962 #undef SPDK_CONFIG_FC 00:16:07.962 #define SPDK_CONFIG_FC_PATH 00:16:07.962 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:07.962 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:07.962 #undef SPDK_CONFIG_FUSE 00:16:07.962 #undef SPDK_CONFIG_FUZZER 00:16:07.962 #define SPDK_CONFIG_FUZZER_LIB 00:16:07.962 #undef SPDK_CONFIG_GOLANG 00:16:07.962 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:07.962 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:07.962 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:07.962 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:07.962 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:07.962 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:07.962 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:07.962 #define SPDK_CONFIG_IDXD 1 00:16:07.962 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:07.963 #undef SPDK_CONFIG_IPSEC_MB 00:16:07.963 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:07.963 #define SPDK_CONFIG_ISAL 1 00:16:07.963 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:07.963 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:07.963 #define SPDK_CONFIG_LIBDIR 00:16:07.963 #undef SPDK_CONFIG_LTO 00:16:07.963 #define SPDK_CONFIG_MAX_LCORES 00:16:07.963 #define SPDK_CONFIG_NVME_CUSE 1 00:16:07.963 #undef SPDK_CONFIG_OCF 00:16:07.963 #define SPDK_CONFIG_OCF_PATH 00:16:07.963 #define 
SPDK_CONFIG_OPENSSL_PATH 00:16:07.963 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:07.963 #define SPDK_CONFIG_PGO_DIR 00:16:07.963 #undef SPDK_CONFIG_PGO_USE 00:16:07.963 #define SPDK_CONFIG_PREFIX /usr/local 00:16:07.963 #undef SPDK_CONFIG_RAID5F 00:16:07.963 #undef SPDK_CONFIG_RBD 00:16:07.963 #define SPDK_CONFIG_RDMA 1 00:16:07.963 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:07.963 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:07.963 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:07.963 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:07.963 #define SPDK_CONFIG_SHARED 1 00:16:07.963 #undef SPDK_CONFIG_SMA 00:16:07.963 #define SPDK_CONFIG_TESTS 1 00:16:07.963 #undef SPDK_CONFIG_TSAN 00:16:07.963 #define SPDK_CONFIG_UBLK 1 00:16:07.963 #define SPDK_CONFIG_UBSAN 1 00:16:07.963 #undef SPDK_CONFIG_UNIT_TESTS 00:16:07.963 #undef SPDK_CONFIG_URING 00:16:07.963 #define SPDK_CONFIG_URING_PATH 00:16:07.963 #undef SPDK_CONFIG_URING_ZNS 00:16:07.963 #undef SPDK_CONFIG_USDT 00:16:07.963 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:07.963 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:07.963 #define SPDK_CONFIG_VFIO_USER 1 00:16:07.963 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:07.963 #define SPDK_CONFIG_VHOST 1 00:16:07.963 #define SPDK_CONFIG_VIRTIO 1 00:16:07.963 #undef SPDK_CONFIG_VTUNE 00:16:07.963 #define SPDK_CONFIG_VTUNE_DIR 00:16:07.963 #define SPDK_CONFIG_WERROR 1 00:16:07.963 #define SPDK_CONFIG_WPDK_DIR 00:16:07.963 #undef SPDK_CONFIG_XNVME 00:16:07.963 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:07.963 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:16:07.964 11:24:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:07.964 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3797155 ]] 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3797155 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.sQGPFl 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.sQGPFl/tests/target /tmp/spdk.sQGPFl 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956952576 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327477248 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=50964598784 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742280704 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10777681920 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30866427904 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871138304 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12338741248 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348456960 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9715712 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30869413888 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871142400 00:16:07.965 11:24:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1728512 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174220288 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174224384 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:16:07.965 * Looking for test storage... 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=50964598784 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12992274432 00:16:07.965 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:16:07.966 11:24:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.966 
11:24:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.966 11:24:32 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.966 11:24:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:17.986 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:17.986 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.986 11:24:41 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:17.986 Found net devices under 0000:af:00.0: cvl_0_0 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.986 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:17.987 Found net devices under 0000:af:00.1: cvl_0_1 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:17.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:16:17.987 00:16:17.987 --- 10.0.0.2 ping statistics --- 00:16:17.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.987 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:16:17.987 00:16:17.987 --- 10.0.0.1 ping statistics --- 00:16:17.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.987 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 ************************************ 00:16:17.987 START TEST nvmf_filesystem_no_in_capsule 00:16:17.987 ************************************ 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # 
xtrace_disable 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3801063 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3801063 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 3801063 ']' 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:17.987 11:24:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 [2024-06-10 11:24:41.732393] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:16:17.987 [2024-06-10 11:24:41.732458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.987 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.987 [2024-06-10 11:24:41.860329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.987 [2024-06-10 11:24:41.952214] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.987 [2024-06-10 11:24:41.952260] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.987 [2024-06-10 11:24:41.952274] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.987 [2024-06-10 11:24:41.952286] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.987 [2024-06-10 11:24:41.952296] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
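The nvmf/common.sh trace above reduces to a small amount of ip/iptables plumbing: one port of the NIC (cvl_0_1) stays in the default namespace as the initiator side, the other (cvl_0_0) is moved into a private namespace for the target, the two get 10.0.0.1/10.0.0.2, port 4420 is opened, and both directions are verified with ping before the target is launched inside the namespace. A condensed sketch of that setup, assuming the interface names and addresses seen in this run:

#!/usr/bin/env bash
# Sketch of the namespace topology nvmf_tcp_init builds in the trace above.
# Interface names (cvl_0_0/cvl_0_1) and addresses are the ones from this run.
set -e

TARGET_IF=cvl_0_0          # moved into the target namespace
INITIATOR_IF=cvl_0_1       # stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open TCP port 4420 (the NVMe/TCP port used by the test) on the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# The target itself then runs inside the namespace, as in the trace:
#   ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF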
00:16:17.987 [2024-06-10 11:24:41.952352] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.987 [2024-06-10 11:24:41.952429] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.987 [2024-06-10 11:24:41.952538] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.987 [2024-06-10 11:24:41.952539] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 [2024-06-10 11:24:42.629923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 Malloc1 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.987 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.988 [2024-06-10 11:24:42.784269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:16:17.988 { 00:16:17.988 "name": "Malloc1", 00:16:17.988 "aliases": [ 00:16:17.988 "b265b628-ded1-4f65-ba61-796e71461a8a" 00:16:17.988 ], 00:16:17.988 "product_name": "Malloc disk", 00:16:17.988 "block_size": 512, 00:16:17.988 "num_blocks": 1048576, 00:16:17.988 "uuid": "b265b628-ded1-4f65-ba61-796e71461a8a", 00:16:17.988 "assigned_rate_limits": { 00:16:17.988 "rw_ios_per_sec": 0, 00:16:17.988 "rw_mbytes_per_sec": 0, 00:16:17.988 "r_mbytes_per_sec": 0, 00:16:17.988 "w_mbytes_per_sec": 0 00:16:17.988 }, 00:16:17.988 "claimed": true, 00:16:17.988 "claim_type": "exclusive_write", 00:16:17.988 "zoned": false, 00:16:17.988 "supported_io_types": { 00:16:17.988 "read": true, 00:16:17.988 "write": true, 00:16:17.988 "unmap": true, 00:16:17.988 "write_zeroes": true, 00:16:17.988 "flush": true, 00:16:17.988 "reset": true, 00:16:17.988 "compare": false, 00:16:17.988 "compare_and_write": false, 00:16:17.988 "abort": true, 00:16:17.988 "nvme_admin": false, 00:16:17.988 "nvme_io": false 00:16:17.988 }, 00:16:17.988 "memory_domains": [ 00:16:17.988 { 00:16:17.988 "dma_device_id": "system", 00:16:17.988 "dma_device_type": 1 00:16:17.988 }, 00:16:17.988 { 00:16:17.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.988 "dma_device_type": 2 00:16:17.988 } 00:16:17.988 ], 00:16:17.988 "driver_specific": {} 00:16:17.988 } 00:16:17.988 ]' 00:16:17.988 
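rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the provisioning performed by filesystem.sh@52-56 can be read as five plain RPC calls. A minimal sketch, assuming the namespace, NQN and serial from this run (the rpc.py path is a placeholder for the checked-out SPDK tree):

#!/usr/bin/env bash
# Rough equivalent of the filesystem.sh@52-56 rpc_cmd calls above.
RPC="ip netns exec cvl_0_0_ns_spdk /path/to/spdk/scripts/rpc.py"   # placeholder path

# TCP transport; -c sets the in-capsule data size (0 for this first pass of the test).
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0

# 512 MiB backing device with 512-byte blocks, i.e. 1048576 blocks as in the JSON dump above.
$RPC bdev_malloc_create 512 512 -b Malloc1

# Subsystem, namespace and TCP listener on the target-side address.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The size check in the trace is just jq over bdev_get_bdevs output:
$RPC bdev_get_bdevs -b Malloc1 | jq '.[] .block_size, .[] .num_blocks'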
11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:17.988 11:24:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.363 11:24:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:19.364 11:24:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:16:19.364 11:24:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.364 11:24:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:19.364 11:24:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:16:21.267 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:21.267 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:21.267 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.267 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:21.267 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.267 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:21.268 11:24:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:21.268 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:21.836 11:24:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:22.094 11:24:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:23.470 ************************************ 00:16:23.470 START TEST filesystem_ext4 00:16:23.470 ************************************ 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:23.470 mke2fs 1.46.5 (30-Dec-2021) 00:16:23.470 Discarding device blocks: 0/522240 done 00:16:23.470 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:23.470 
Filesystem UUID: 828fc3fe-0bb5-4cf1-9c17-d0ff03e23650 00:16:23.470 Superblock backups stored on blocks: 00:16:23.470 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:23.470 00:16:23.470 Allocating group tables: 0/64 done 00:16:23.470 Writing inode tables: 0/64 done 00:16:23.470 Creating journal (8192 blocks): done 00:16:23.470 Writing superblocks and filesystem accounting information: 0/64 done 00:16:23.470 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:16:23.470 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3801063 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:23.729 00:16:23.729 real 0m0.499s 00:16:23.729 user 0m0.034s 00:16:23.729 sys 0m0.071s 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:23.729 ************************************ 00:16:23.729 END TEST filesystem_ext4 00:16:23.729 ************************************ 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:23.729 ************************************ 00:16:23.729 START TEST filesystem_btrfs 00:16:23.729 ************************************ 00:16:23.729 11:24:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:16:23.729 11:24:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:23.988 btrfs-progs v6.6.2 00:16:23.988 See https://btrfs.readthedocs.io for more information. 00:16:23.988 00:16:23.988 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:16:23.988 NOTE: several default settings have changed in version 5.15, please make sure 00:16:23.988 this does not affect your deployments: 00:16:23.988 - DUP for metadata (-m dup) 00:16:23.988 - enabled no-holes (-O no-holes) 00:16:23.988 - enabled free-space-tree (-R free-space-tree) 00:16:23.988 00:16:23.988 Label: (null) 00:16:23.988 UUID: 9172f832-40e0-4f02-a09c-2382583a4796 00:16:23.988 Node size: 16384 00:16:23.988 Sector size: 4096 00:16:23.988 Filesystem size: 510.00MiB 00:16:23.988 Block group profiles: 00:16:23.988 Data: single 8.00MiB 00:16:23.988 Metadata: DUP 32.00MiB 00:16:23.988 System: DUP 8.00MiB 00:16:23.988 SSD detected: yes 00:16:23.988 Zoned device: no 00:16:23.988 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:16:23.988 Runtime features: free-space-tree 00:16:23.988 Checksum: crc32c 00:16:23.988 Number of devices: 1 00:16:23.988 Devices: 00:16:23.988 ID SIZE PATH 00:16:23.988 1 510.00MiB /dev/nvme0n1p1 00:16:23.988 00:16:23.988 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:16:23.988 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3801063 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:24.557 00:16:24.557 real 0m0.829s 00:16:24.557 user 0m0.028s 00:16:24.557 sys 0m0.144s 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:24.557 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:24.557 ************************************ 00:16:24.557 END TEST filesystem_btrfs 00:16:24.557 ************************************ 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:16:24.815 11:24:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:24.815 ************************************ 00:16:24.815 START TEST filesystem_xfs 00:16:24.815 ************************************ 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:16:24.815 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:16:24.816 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:16:24.816 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:16:24.816 11:24:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:24.816 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:24.816 = sectsz=512 attr=2, projid32bit=1 00:16:24.816 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:24.816 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:24.816 data = bsize=4096 blocks=130560, imaxpct=25 00:16:24.816 = sunit=0 swidth=0 blks 00:16:24.816 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:24.816 log =internal log bsize=4096 blocks=16384, version=2 00:16:24.816 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:24.816 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:25.752 Discarding blocks...Done. 
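The host-side pattern repeated for each filesystem (ext4 and btrfs above, xfs here) is the same handful of steps from target/filesystem.sh: connect, wait for the namespace to appear in lsblk, lay down one GPT partition, then mkfs/mount/touch/sync/rm/umount and confirm the target process is still alive. A compressed sketch of that loop, using the NQN, serial and address from this run; NVMF_TGT_PID is a placeholder for the target pid (3801063 in this run):

#!/usr/bin/env bash
# Host-side create-and-verify loop corresponding to the filesystem.sh steps above.
set -e
FSTYPE=${1:-xfs}
NQN=nqn.2016-06.io.spdk:cnode1
SERIAL=SPDKISFASTANDAWESOME

# The run above also passes --hostnqn/--hostid derived from the host UUID.
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420

# Wait until the namespace shows up, then resolve its block device name.
until lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 1; done
DEV=$(lsblk -l -o NAME,SERIAL | awk -v s="$SERIAL" '$2 == s {print $1}')

# One GPT partition over the whole 512 MiB namespace.
parted -s "/dev/$DEV" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1

# make_filesystem uses -F for ext4 and -f for btrfs/xfs.
if [ "$FSTYPE" = ext4 ]; then FORCE=-F; else FORCE=-f; fi
mkdir -p /mnt/device
mkfs."$FSTYPE" "$FORCE" "/dev/${DEV}p1"
mount "/dev/${DEV}p1" /mnt/device

# Minimal I/O check: create, sync, delete, sync, unmount.
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

# The target must still be running; kill -0 only probes the pid.
kill -0 "$NVMF_TGT_PID"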
00:16:25.752 11:24:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:16:25.752 11:24:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:27.654 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:27.654 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:16:27.654 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:27.654 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:16:27.654 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:16:27.654 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:27.911 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3801063 00:16:27.911 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:27.911 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:27.911 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:27.911 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:27.911 00:16:27.912 real 0m3.064s 00:16:27.912 user 0m0.030s 00:16:27.912 sys 0m0.083s 00:16:27.912 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:27.912 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:27.912 ************************************ 00:16:27.912 END TEST filesystem_xfs 00:16:27.912 ************************************ 00:16:27.912 11:24:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:28.170 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:28.170 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:28.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:28.429 
11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3801063 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 3801063 ']' 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 3801063 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3801063 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3801063' 00:16:28.429 killing process with pid 3801063 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 3801063 00:16:28.429 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 3801063 00:16:28.689 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:28.689 00:16:28.689 real 0m12.127s 00:16:28.689 user 0m46.878s 00:16:28.689 sys 0m1.909s 00:16:28.689 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:28.689 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:28.689 ************************************ 00:16:28.689 END TEST nvmf_filesystem_no_in_capsule 00:16:28.689 ************************************ 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:28.949 
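Teardown in the trace is equally small: drop the test partition, disconnect the initiator, delete the subsystem over RPC, and stop the target pid. Roughly, with the device, NQN and pid from this run (rpc.py path again a placeholder):

#!/usr/bin/env bash
# Teardown mirroring filesystem.sh@91-101 above.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # remove the SPDK_TEST partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# Delete the subsystem over RPC, then stop the target (pid 3801063 in this run).
ip netns exec cvl_0_0_ns_spdk /path/to/spdk/scripts/rpc.py \
    nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 3801063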
************************************ 00:16:28.949 START TEST nvmf_filesystem_in_capsule 00:16:28.949 ************************************ 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3803404 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3803404 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 3803404 ']' 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:28.949 11:24:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:28.949 [2024-06-10 11:24:53.940414] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:16:28.949 [2024-06-10 11:24:53.940468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.949 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.208 [2024-06-10 11:24:54.068025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.208 [2024-06-10 11:24:54.152499] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.208 [2024-06-10 11:24:54.152545] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.208 [2024-06-10 11:24:54.152559] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.208 [2024-06-10 11:24:54.152572] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.208 [2024-06-10 11:24:54.152586] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
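The pass that starts here repeats the same flow with in_capsule=4096: the only functional difference from the first half is the transport creation shown below, which enables a 4096-byte in-capsule data size so small host-to-controller transfers can ride inside the NVMe/TCP command capsule instead of being sent as a separate data transfer.

# First half (no in-capsule data):
#   rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
# Second half (4 KiB of in-capsule data):
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096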
00:16:29.208 [2024-06-10 11:24:54.152654] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.208 [2024-06-10 11:24:54.152682] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.208 [2024-06-10 11:24:54.152803] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.208 [2024-06-10 11:24:54.152803] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.775 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:29.775 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:16:29.775 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.775 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:29.775 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.034 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.034 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:30.034 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:16:30.034 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.034 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.034 [2024-06-10 11:24:54.905180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.034 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.034 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:30.034 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.034 11:24:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.034 Malloc1 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.034 11:24:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.034 [2024-06-10 11:24:55.068310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.034 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:16:30.034 { 00:16:30.034 "name": "Malloc1", 00:16:30.034 "aliases": [ 00:16:30.034 "d9b24c4c-602a-416d-b0cb-2da48ab9a329" 00:16:30.034 ], 00:16:30.034 "product_name": "Malloc disk", 00:16:30.034 "block_size": 512, 00:16:30.034 "num_blocks": 1048576, 00:16:30.034 "uuid": "d9b24c4c-602a-416d-b0cb-2da48ab9a329", 00:16:30.034 "assigned_rate_limits": { 00:16:30.034 "rw_ios_per_sec": 0, 00:16:30.034 "rw_mbytes_per_sec": 0, 00:16:30.034 "r_mbytes_per_sec": 0, 00:16:30.034 "w_mbytes_per_sec": 0 00:16:30.034 }, 00:16:30.034 "claimed": true, 00:16:30.034 "claim_type": "exclusive_write", 00:16:30.034 "zoned": false, 00:16:30.034 "supported_io_types": { 00:16:30.034 "read": true, 00:16:30.034 "write": true, 00:16:30.034 "unmap": true, 00:16:30.034 "write_zeroes": true, 00:16:30.034 "flush": true, 00:16:30.034 "reset": true, 00:16:30.034 "compare": false, 00:16:30.034 "compare_and_write": false, 00:16:30.034 "abort": true, 00:16:30.034 "nvme_admin": false, 00:16:30.034 "nvme_io": false 00:16:30.034 }, 00:16:30.034 "memory_domains": [ 00:16:30.034 { 00:16:30.034 "dma_device_id": "system", 00:16:30.034 "dma_device_type": 1 00:16:30.034 }, 00:16:30.034 { 00:16:30.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.034 "dma_device_type": 2 00:16:30.035 } 00:16:30.035 ], 00:16:30.035 "driver_specific": {} 00:16:30.035 } 00:16:30.035 ]' 00:16:30.035 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:16:30.293 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:16:30.293 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:16:30.293 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:16:30.293 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:16:30.293 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:16:30.293 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:30.293 11:24:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:31.668 11:24:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.668 11:24:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:16:31.668 11:24:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.668 11:24:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:31.668 11:24:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:33.569 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:33.828 11:24:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:34.765 11:24:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:35.701 ************************************ 00:16:35.701 START TEST filesystem_in_capsule_ext4 00:16:35.701 ************************************ 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:16:35.701 11:25:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:35.701 mke2fs 1.46.5 (30-Dec-2021) 00:16:35.959 Discarding device blocks: 0/522240 done 00:16:35.959 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:35.959 Filesystem UUID: fefc608a-8e4b-4099-8b68-a625a1a7cb28 00:16:35.959 Superblock backups stored on blocks: 00:16:35.959 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:35.959 00:16:35.959 Allocating group tables: 0/64 done 00:16:35.959 Writing inode tables: 0/64 done 00:16:38.494 Creating journal (8192 blocks): done 00:16:39.321 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:16:39.321 00:16:39.321 11:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:16:39.321 11:25:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:40.257 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:40.257 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:40.257 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:40.257 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:40.257 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:40.257 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:40.257 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3803404 00:16:40.257 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:40.257 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:40.516 00:16:40.516 real 0m4.656s 00:16:40.516 user 0m0.026s 00:16:40.516 sys 0m0.086s 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:40.516 ************************************ 00:16:40.516 END TEST filesystem_in_capsule_ext4 00:16:40.516 ************************************ 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:40.516 ************************************ 00:16:40.516 START TEST filesystem_in_capsule_btrfs 00:16:40.516 ************************************ 00:16:40.516 11:25:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:16:40.516 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:40.775 btrfs-progs v6.6.2 00:16:40.775 See https://btrfs.readthedocs.io for more information. 00:16:40.775 00:16:40.775 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:16:40.775 NOTE: several default settings have changed in version 5.15, please make sure 00:16:40.775 this does not affect your deployments: 00:16:40.775 - DUP for metadata (-m dup) 00:16:40.775 - enabled no-holes (-O no-holes) 00:16:40.775 - enabled free-space-tree (-R free-space-tree) 00:16:40.775 00:16:40.775 Label: (null) 00:16:40.775 UUID: d65ab4dd-8899-49a4-88a2-f88d1cf8c4a5 00:16:40.775 Node size: 16384 00:16:40.775 Sector size: 4096 00:16:40.775 Filesystem size: 510.00MiB 00:16:40.775 Block group profiles: 00:16:40.775 Data: single 8.00MiB 00:16:40.775 Metadata: DUP 32.00MiB 00:16:40.775 System: DUP 8.00MiB 00:16:40.775 SSD detected: yes 00:16:40.775 Zoned device: no 00:16:40.775 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:16:40.775 Runtime features: free-space-tree 00:16:40.775 Checksum: crc32c 00:16:40.775 Number of devices: 1 00:16:40.776 Devices: 00:16:40.776 ID SIZE PATH 00:16:40.776 1 510.00MiB /dev/nvme0n1p1 00:16:40.776 00:16:40.776 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:16:40.776 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3803404 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:41.712 00:16:41.712 real 0m1.123s 00:16:41.712 user 0m0.033s 00:16:41.712 sys 0m0.146s 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:41.712 ************************************ 00:16:41.712 END TEST filesystem_in_capsule_btrfs 00:16:41.712 ************************************ 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:41.712 ************************************ 00:16:41.712 START TEST filesystem_in_capsule_xfs 00:16:41.712 ************************************ 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:16:41.712 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:41.712 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:41.712 = sectsz=512 attr=2, projid32bit=1 00:16:41.712 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:41.712 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:41.712 data = bsize=4096 blocks=130560, imaxpct=25 00:16:41.712 = sunit=0 swidth=0 blks 00:16:41.712 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:41.712 log =internal log bsize=4096 blocks=16384, version=2 00:16:41.712 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:41.712 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:43.090 Discarding blocks...Done. 
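[Editor's note] For context on the make_filesystem calls traced above (ext4, btrfs, and now xfs): the helper in autotest_common.sh (around lines 925-944 in this xtrace) simply selects the right force flag for the filesystem type and runs the matching mkfs. Below is a minimal sketch reconstructed only from what the trace shows; the retry counter i appears in the trace but its retry/error-handling logic is not visible here and is left out as an assumption.

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0            # retry counter visible in the trace; retry loop itself not shown in this log
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F         # mkfs.ext4 forces with -F (traced for the ext4 case)
        else
            force=-f         # mkfs.btrfs and mkfs.xfs force with -f (traced for those cases)
        fi
        mkfs.$fstype $force "$dev_name" || return 1
        return 0
    }
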
00:16:43.090 11:25:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:16:43.090 11:25:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3803404 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:44.997 00:16:44.997 real 0m3.192s 00:16:44.997 user 0m0.022s 00:16:44.997 sys 0m0.094s 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:44.997 ************************************ 00:16:44.997 END TEST filesystem_in_capsule_xfs 00:16:44.997 ************************************ 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:44.997 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:44.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.997 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:44.997 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:16:44.997 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:44.997 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.997 11:25:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:44.997 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3803404 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 3803404 ']' 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 3803404 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:16:45.257 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:45.258 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3803404 00:16:45.258 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:45.258 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:45.258 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3803404' 00:16:45.258 killing process with pid 3803404 00:16:45.258 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 3803404 00:16:45.258 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 3803404 00:16:45.516 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:45.517 00:16:45.517 real 0m16.657s 00:16:45.517 user 1m4.803s 00:16:45.517 sys 0m2.166s 00:16:45.517 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:45.517 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:45.517 ************************************ 00:16:45.517 END TEST nvmf_filesystem_in_capsule 00:16:45.517 ************************************ 00:16:45.517 11:25:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:45.517 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:45.517 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:16:45.517 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.517 11:25:10 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:16:45.517 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.517 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.517 rmmod nvme_tcp 00:16:45.517 rmmod nvme_fabrics 00:16:45.776 rmmod nvme_keyring 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.776 11:25:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.774 11:25:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:47.774 00:16:47.774 real 0m40.046s 00:16:47.774 user 1m54.133s 00:16:47.774 sys 0m10.923s 00:16:47.774 11:25:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:47.774 11:25:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:47.774 ************************************ 00:16:47.774 END TEST nvmf_filesystem 00:16:47.774 ************************************ 00:16:47.774 11:25:12 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:47.774 11:25:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:47.774 11:25:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:47.774 11:25:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:47.774 ************************************ 00:16:47.774 START TEST nvmf_target_discovery 00:16:47.774 ************************************ 00:16:47.774 11:25:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:48.033 * Looking for test storage... 
00:16:48.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:48.033 11:25:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.033 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:48.033 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:16:48.034 11:25:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.155 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.156 11:25:21 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:56.156 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:56.156 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:56.156 Found net devices under 0000:af:00.0: cvl_0_0 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:56.156 Found net devices under 0000:af:00.1: cvl_0_1 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.156 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.415 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.415 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.415 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:16:56.415 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.415 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.415 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.415 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:56.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:16:56.674 00:16:56.674 --- 10.0.0.2 ping statistics --- 00:16:56.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.674 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:16:56.674 00:16:56.674 --- 10.0.0.1 ping statistics --- 00:16:56.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.674 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.674 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3810982 00:16:56.675 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:56.675 11:25:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3810982 00:16:56.675 11:25:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 3810982 ']' 00:16:56.675 11:25:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.675 11:25:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:56.675 11:25:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:16:56.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.675 11:25:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:56.675 11:25:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.675 [2024-06-10 11:25:21.636195] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:16:56.675 [2024-06-10 11:25:21.636256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.675 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.675 [2024-06-10 11:25:21.763565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.933 [2024-06-10 11:25:21.850441] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.933 [2024-06-10 11:25:21.850488] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.933 [2024-06-10 11:25:21.850501] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.933 [2024-06-10 11:25:21.850514] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.933 [2024-06-10 11:25:21.850524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.933 [2024-06-10 11:25:21.850595] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.933 [2024-06-10 11:25:21.850654] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.933 [2024-06-10 11:25:21.850765] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.933 [2024-06-10 11:25:21.850767] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.507 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:57.507 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:16:57.508 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.508 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:57.508 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.508 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.508 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.508 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.508 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.508 [2024-06-10 11:25:22.601795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.508 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:57.767 11:25:22 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.767 Null1 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.767 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.767 [2024-06-10 11:25:22.654102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 Null2 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:57.768 11:25:22 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 Null3 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 Null4 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:57.768 00:16:57.768 Discovery Log Number of Records 6, Generation counter 6 00:16:57.768 =====Discovery Log Entry 0====== 00:16:57.768 trtype: tcp 00:16:57.768 adrfam: ipv4 00:16:57.768 subtype: current discovery subsystem 00:16:57.768 treq: not required 00:16:57.768 portid: 0 00:16:57.768 trsvcid: 4420 00:16:57.768 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:57.768 traddr: 10.0.0.2 00:16:57.768 eflags: explicit discovery connections, duplicate discovery information 00:16:57.768 sectype: none 00:16:57.768 =====Discovery Log Entry 1====== 00:16:57.768 trtype: tcp 00:16:57.768 adrfam: ipv4 00:16:57.768 subtype: nvme subsystem 00:16:57.768 treq: not required 00:16:57.768 portid: 0 00:16:57.768 trsvcid: 4420 00:16:57.768 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:57.768 traddr: 10.0.0.2 00:16:57.768 eflags: none 00:16:57.768 sectype: none 00:16:57.768 =====Discovery Log Entry 2====== 00:16:57.768 trtype: tcp 00:16:57.768 adrfam: ipv4 00:16:57.768 subtype: nvme subsystem 00:16:57.768 treq: not required 00:16:57.768 portid: 0 00:16:57.768 trsvcid: 4420 00:16:57.768 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:57.768 traddr: 10.0.0.2 00:16:57.768 eflags: none 00:16:57.768 sectype: none 00:16:57.768 =====Discovery Log Entry 3====== 00:16:57.768 trtype: tcp 00:16:57.768 adrfam: ipv4 00:16:57.768 subtype: nvme subsystem 00:16:57.768 treq: not required 00:16:57.768 portid: 0 00:16:57.768 trsvcid: 4420 00:16:57.768 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:57.768 traddr: 10.0.0.2 00:16:57.768 eflags: none 00:16:57.768 sectype: none 00:16:57.768 =====Discovery Log Entry 4====== 00:16:57.768 trtype: tcp 00:16:57.768 adrfam: ipv4 00:16:57.768 subtype: nvme subsystem 00:16:57.768 treq: not required 
00:16:57.768 portid: 0 00:16:57.768 trsvcid: 4420 00:16:57.768 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:57.768 traddr: 10.0.0.2 00:16:57.768 eflags: none 00:16:57.768 sectype: none 00:16:57.768 =====Discovery Log Entry 5====== 00:16:57.768 trtype: tcp 00:16:57.768 adrfam: ipv4 00:16:57.768 subtype: discovery subsystem referral 00:16:57.768 treq: not required 00:16:57.768 portid: 0 00:16:57.768 trsvcid: 4430 00:16:57.768 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:57.768 traddr: 10.0.0.2 00:16:57.768 eflags: none 00:16:57.768 sectype: none 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:57.768 Perform nvmf subsystem discovery via RPC 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:57.768 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:57.768 [ 00:16:57.768 { 00:16:57.768 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:57.768 "subtype": "Discovery", 00:16:57.768 "listen_addresses": [ 00:16:57.768 { 00:16:57.768 "trtype": "TCP", 00:16:57.768 "adrfam": "IPv4", 00:16:57.768 "traddr": "10.0.0.2", 00:16:57.768 "trsvcid": "4420" 00:16:57.768 } 00:16:57.768 ], 00:16:57.768 "allow_any_host": true, 00:16:57.768 "hosts": [] 00:16:57.768 }, 00:16:57.768 { 00:16:57.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.768 "subtype": "NVMe", 00:16:57.768 "listen_addresses": [ 00:16:57.768 { 00:16:57.768 "trtype": "TCP", 00:16:57.768 "adrfam": "IPv4", 00:16:57.768 "traddr": "10.0.0.2", 00:16:57.768 "trsvcid": "4420" 00:16:57.768 } 00:16:57.768 ], 00:16:57.768 "allow_any_host": true, 00:16:57.768 "hosts": [], 00:16:57.768 "serial_number": "SPDK00000000000001", 00:16:57.768 "model_number": "SPDK bdev Controller", 00:16:57.768 "max_namespaces": 32, 00:16:57.768 "min_cntlid": 1, 00:16:57.768 "max_cntlid": 65519, 00:16:57.768 "namespaces": [ 00:16:57.768 { 00:16:57.768 "nsid": 1, 00:16:57.768 "bdev_name": "Null1", 00:16:57.768 "name": "Null1", 00:16:57.768 "nguid": "487A7AF588E445A2A8D1374D5E4971BC", 00:16:57.768 "uuid": "487a7af5-88e4-45a2-a8d1-374d5e4971bc" 00:16:57.768 } 00:16:57.768 ] 00:16:57.768 }, 00:16:57.768 { 00:16:57.769 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:57.769 "subtype": "NVMe", 00:16:57.769 "listen_addresses": [ 00:16:57.769 { 00:16:57.769 "trtype": "TCP", 00:16:58.027 "adrfam": "IPv4", 00:16:58.027 "traddr": "10.0.0.2", 00:16:58.027 "trsvcid": "4420" 00:16:58.027 } 00:16:58.027 ], 00:16:58.027 "allow_any_host": true, 00:16:58.027 "hosts": [], 00:16:58.027 "serial_number": "SPDK00000000000002", 00:16:58.027 "model_number": "SPDK bdev Controller", 00:16:58.027 "max_namespaces": 32, 00:16:58.027 "min_cntlid": 1, 00:16:58.027 "max_cntlid": 65519, 00:16:58.027 "namespaces": [ 00:16:58.027 { 00:16:58.027 "nsid": 1, 00:16:58.027 "bdev_name": "Null2", 00:16:58.027 "name": "Null2", 00:16:58.027 "nguid": "561066C262C54E6E83F4E649C24753FD", 00:16:58.027 "uuid": "561066c2-62c5-4e6e-83f4-e649c24753fd" 00:16:58.027 } 00:16:58.027 ] 00:16:58.027 }, 00:16:58.027 { 00:16:58.027 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:58.027 "subtype": "NVMe", 00:16:58.027 "listen_addresses": [ 00:16:58.027 { 00:16:58.027 "trtype": "TCP", 00:16:58.027 "adrfam": "IPv4", 00:16:58.027 "traddr": "10.0.0.2", 00:16:58.027 "trsvcid": "4420" 00:16:58.027 } 00:16:58.027 ], 00:16:58.027 "allow_any_host": true, 
00:16:58.027 "hosts": [], 00:16:58.027 "serial_number": "SPDK00000000000003", 00:16:58.027 "model_number": "SPDK bdev Controller", 00:16:58.027 "max_namespaces": 32, 00:16:58.027 "min_cntlid": 1, 00:16:58.027 "max_cntlid": 65519, 00:16:58.027 "namespaces": [ 00:16:58.027 { 00:16:58.027 "nsid": 1, 00:16:58.027 "bdev_name": "Null3", 00:16:58.027 "name": "Null3", 00:16:58.027 "nguid": "FB8900B154134C46A7A2EBB7DA7147A6", 00:16:58.027 "uuid": "fb8900b1-5413-4c46-a7a2-ebb7da7147a6" 00:16:58.027 } 00:16:58.027 ] 00:16:58.027 }, 00:16:58.027 { 00:16:58.027 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:58.027 "subtype": "NVMe", 00:16:58.027 "listen_addresses": [ 00:16:58.027 { 00:16:58.027 "trtype": "TCP", 00:16:58.028 "adrfam": "IPv4", 00:16:58.028 "traddr": "10.0.0.2", 00:16:58.028 "trsvcid": "4420" 00:16:58.028 } 00:16:58.028 ], 00:16:58.028 "allow_any_host": true, 00:16:58.028 "hosts": [], 00:16:58.028 "serial_number": "SPDK00000000000004", 00:16:58.028 "model_number": "SPDK bdev Controller", 00:16:58.028 "max_namespaces": 32, 00:16:58.028 "min_cntlid": 1, 00:16:58.028 "max_cntlid": 65519, 00:16:58.028 "namespaces": [ 00:16:58.028 { 00:16:58.028 "nsid": 1, 00:16:58.028 "bdev_name": "Null4", 00:16:58.028 "name": "Null4", 00:16:58.028 "nguid": "7CA93357A4E342A1A2BE4B27D22E3470", 00:16:58.028 "uuid": "7ca93357-a4e3-42a1-a2be-4b27d22e3470" 00:16:58.028 } 00:16:58.028 ] 00:16:58.028 } 00:16:58.028 ] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.028 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.028 rmmod nvme_tcp 00:16:58.028 rmmod nvme_fabrics 00:16:58.028 rmmod nvme_keyring 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3810982 ']' 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3810982 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 3810982 ']' 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 3810982 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3810982 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3810982' 00:16:58.028 killing process with pid 3810982 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 3810982 00:16:58.028 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 3810982 00:16:58.287 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.287 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.287 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.287 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.287 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.287 11:25:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.287 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.287 11:25:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.825 11:25:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.825 00:17:00.825 real 0m12.583s 00:17:00.825 user 0m8.393s 00:17:00.825 sys 0m7.098s 00:17:00.825 11:25:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:00.825 11:25:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:00.825 ************************************ 00:17:00.825 END TEST nvmf_target_discovery 00:17:00.825 ************************************ 00:17:00.825 11:25:25 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:00.825 11:25:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:00.825 11:25:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:00.825 11:25:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.825 ************************************ 00:17:00.825 START TEST nvmf_referrals 00:17:00.825 ************************************ 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:00.825 * Looking for test storage... 00:17:00.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.825 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.805 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.805 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.805 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.805 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.805 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.806 11:25:34 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:10.806 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:10.806 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.806 11:25:34 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:10.806 Found net devices under 0000:af:00.0: cvl_0_0 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:10.806 Found net devices under 0000:af:00.1: cvl_0_1 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.806 11:25:34 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:17:10.806 00:17:10.806 --- 10.0.0.2 ping statistics --- 00:17:10.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.806 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:17:10.806 00:17:10.806 --- 10.0.0.1 ping statistics --- 00:17:10.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.806 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3815722 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3815722 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 3815722 ']' 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:10.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:10.806 11:25:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.806 [2024-06-10 11:25:34.547987] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:17:10.807 [2024-06-10 11:25:34.548071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.807 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.807 [2024-06-10 11:25:34.678187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.807 [2024-06-10 11:25:34.764772] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.807 [2024-06-10 11:25:34.764817] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.807 [2024-06-10 11:25:34.764831] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.807 [2024-06-10 11:25:34.764843] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.807 [2024-06-10 11:25:34.764853] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.807 [2024-06-10 11:25:34.764911] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.807 [2024-06-10 11:25:34.764989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.807 [2024-06-10 11:25:34.765101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.807 [2024-06-10 11:25:34.765100] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 [2024-06-10 11:25:35.511714] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 [2024-06-10 11:25:35.527907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
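(Condensed sketch of the referral flow exercised below, using the same RPCs the trace shows; the discovery listener was created on 10.0.0.2:8009 just above. rpc_cmd is the autotest wrapper around scripts/rpc.py, and this summary is illustrative rather than a verbatim excerpt of referrals.sh.)

    # add three referrals, then verify the RPC view and the initiator's discovery view agree
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a $ip -s 4430
    done
    rpc_cmd nvmf_discovery_get_referrals | jq length    # expected: 3
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json
    # remove them again; the referral list and the discovery log entries should both come back empty
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
    done
    rpc_cmd nvmf_discovery_get_referrals | jq length    # expected: 0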
00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.807 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:11.066 11:25:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.066 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:17:11.066 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:17:11.066 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:11.066 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:11.066 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:11.066 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:11.066 11:25:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:11.066 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:17:11.067 11:25:36 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:11.067 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:11.326 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:17:11.326 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:11.326 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:17:11.326 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:17:11.326 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:11.326 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:11.326 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:11.584 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:11.584 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:17:11.584 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:17:11.584 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:11.584 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:11.584 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:11.584 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:11.585 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:11.844 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:17:11.844 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:11.844 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:17:11.844 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:17:11.844 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:11.844 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:11.844 11:25:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:12.103 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.361 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:17:12.361 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:17:12.361 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:12.361 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.362 rmmod nvme_tcp 00:17:12.362 rmmod nvme_fabrics 00:17:12.362 rmmod nvme_keyring 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3815722 ']' 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3815722 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 3815722 ']' 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 3815722 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:12.362 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3815722 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3815722' 00:17:12.621 killing process with pid 3815722 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 3815722 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 3815722 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.621 11:25:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.156 11:25:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.156 00:17:15.156 real 0m14.256s 00:17:15.156 user 0m14.846s 00:17:15.156 sys 0m7.720s 00:17:15.156 11:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:17:15.156 11:25:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:15.156 ************************************ 00:17:15.156 END TEST nvmf_referrals 00:17:15.156 ************************************ 00:17:15.156 11:25:39 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:15.156 11:25:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:15.156 11:25:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:15.156 11:25:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.156 ************************************ 00:17:15.156 START TEST nvmf_connect_disconnect 00:17:15.156 ************************************ 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:15.156 * Looking for test storage... 00:17:15.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:17:15.156 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.157 11:25:39 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.157 11:25:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.157 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.157 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.157 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.157 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:23.278 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:23.279 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:23.279 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:23.279 Found net devices under 0000:af:00.0: cvl_0_0 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:23.279 Found net devices under 0000:af:00.1: cvl_0_1 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:23.279 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:23.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:17:23.538 00:17:23.538 --- 10.0.0.2 ping statistics --- 00:17:23.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.538 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:23.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:23.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:17:23.538 00:17:23.538 --- 10.0.0.1 ping statistics --- 00:17:23.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.538 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:23.538 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3820787 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3820787 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 3820787 ']' 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:23.797 11:25:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:23.797 [2024-06-10 11:25:48.716907] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:17:23.797 [2024-06-10 11:25:48.716965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.797 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.797 [2024-06-10 11:25:48.844991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.056 [2024-06-10 11:25:48.929604] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.056 [2024-06-10 11:25:48.929648] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.056 [2024-06-10 11:25:48.929662] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.056 [2024-06-10 11:25:48.929675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.056 [2024-06-10 11:25:48.929684] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.056 [2024-06-10 11:25:48.929743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.056 [2024-06-10 11:25:48.929823] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.056 [2024-06-10 11:25:48.929937] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.056 [2024-06-10 11:25:48.929937] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:24.625 [2024-06-10 11:25:49.633972] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:24.625 11:25:49 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:24.625 [2024-06-10 11:25:49.690071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:17:24.625 11:25:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:17:28.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.095 rmmod nvme_tcp 00:17:42.095 rmmod nvme_fabrics 00:17:42.095 rmmod nvme_keyring 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:17:42.095 11:26:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3820787 ']' 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3820787 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@949 -- # '[' -z 3820787 ']' 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 3820787 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3820787 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3820787' 00:17:42.095 killing process with pid 3820787 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 3820787 00:17:42.095 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 3820787 00:17:42.355 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.355 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.355 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.355 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.355 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.355 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.355 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.355 11:26:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.890 11:26:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.890 00:17:44.890 real 0m29.516s 00:17:44.890 user 1m14.522s 00:17:44.890 sys 0m8.866s 00:17:44.890 11:26:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:44.890 11:26:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:44.890 ************************************ 00:17:44.890 END TEST nvmf_connect_disconnect 00:17:44.890 ************************************ 00:17:44.890 11:26:09 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:44.890 11:26:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:44.890 11:26:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:44.890 11:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:44.890 ************************************ 00:17:44.890 START TEST nvmf_multitarget 00:17:44.890 ************************************ 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:44.890 * Looking for test storage... 
00:17:44.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.890 11:26:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:54.869 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.869 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:54.870 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:54.870 Found net devices under 0000:af:00.0: cvl_0_0 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:54.870 Found net devices under 0000:af:00.1: cvl_0_1 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:54.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:54.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:17:54.870 00:17:54.870 --- 10.0.0.2 ping statistics --- 00:17:54.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.870 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:17:54.870 00:17:54.870 --- 10.0.0.1 ping statistics --- 00:17:54.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.870 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3828524 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3828524 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 3828524 ']' 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:54.870 11:26:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:54.870 [2024-06-10 11:26:18.562606] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:17:54.870 [2024-06-10 11:26:18.562664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.870 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.870 [2024-06-10 11:26:18.691057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.870 [2024-06-10 11:26:18.773608] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.870 [2024-06-10 11:26:18.773659] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.870 [2024-06-10 11:26:18.773673] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.870 [2024-06-10 11:26:18.773686] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.870 [2024-06-10 11:26:18.773697] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.870 [2024-06-10 11:26:18.773759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.870 [2024-06-10 11:26:18.773854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.870 [2024-06-10 11:26:18.773954] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.870 [2024-06-10 11:26:18.773954] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:54.870 "nvmf_tgt_1" 00:17:54.870 11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:54.870 "nvmf_tgt_2" 00:17:54.871 11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:54.871 11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:55.129 11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:55.129 
11:26:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:55.129 true 00:17:55.129 11:26:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:55.129 true 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.388 rmmod nvme_tcp 00:17:55.388 rmmod nvme_fabrics 00:17:55.388 rmmod nvme_keyring 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3828524 ']' 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3828524 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 3828524 ']' 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 3828524 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3828524 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3828524' 00:17:55.388 killing process with pid 3828524 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 3828524 00:17:55.388 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 3828524 00:17:55.647 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.647 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.647 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.647 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.647 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.647 11:26:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.647 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.647 11:26:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.182 11:26:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:58.182 00:17:58.182 real 0m13.305s 00:17:58.182 user 0m10.823s 00:17:58.182 sys 0m7.470s 00:17:58.182 11:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:58.182 11:26:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:58.182 ************************************ 00:17:58.182 END TEST nvmf_multitarget 00:17:58.182 ************************************ 00:17:58.182 11:26:22 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:58.182 11:26:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:58.182 11:26:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:58.182 11:26:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.182 ************************************ 00:17:58.182 START TEST nvmf_rpc 00:17:58.182 ************************************ 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:58.182 * Looking for test storage... 00:17:58.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.182 11:26:22 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.182 
11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.182 11:26:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.182 11:26:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.182 11:26:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.182 11:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.182 11:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.182 11:26:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:58.182 11:26:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:58.182 11:26:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.182 11:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:06.320 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:06.320 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:06.320 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:06.321 Found net devices under 0000:af:00.0: cvl_0_0 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.321 
11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:06.321 Found net devices under 0000:af:00.1: cvl_0_1 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:06.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:06.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:18:06.321 00:18:06.321 --- 10.0.0.2 ping statistics --- 00:18:06.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.321 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:18:06.321 00:18:06.321 --- 10.0.0.1 ping statistics --- 00:18:06.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.321 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:06.321 11:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.580 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3833268 00:18:06.580 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3833268 00:18:06.580 11:26:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:06.580 11:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 3833268 ']' 00:18:06.580 11:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.580 11:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:06.580 11:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.580 11:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:06.580 11:26:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.580 [2024-06-10 11:26:31.483328] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
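The nvmftestinit phase above amounts to splitting the two e810 ports between the host and a network namespace so that initiator and target traffic crosses a real link. A condensed sketch of that setup, using the interface names, addresses, and port seen in the log; it restates what nvmf/common.sh does rather than reproducing the script, and assumes root privileges.

# Sketch of the namespace and addressing setup performed by nvmftestinit.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # target address reachable from the host side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # and the host reachable from the namespace
modprobe nvme-tcp
# nvmf_tgt is then launched inside the namespace, as the log shows next:
# ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF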
00:18:06.580 [2024-06-10 11:26:31.483392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.580 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.580 [2024-06-10 11:26:31.612553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.838 [2024-06-10 11:26:31.700935] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.838 [2024-06-10 11:26:31.700979] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.838 [2024-06-10 11:26:31.700992] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.838 [2024-06-10 11:26:31.701004] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.838 [2024-06-10 11:26:31.701018] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.838 [2024-06-10 11:26:31.701070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.838 [2024-06-10 11:26:31.701088] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.838 [2024-06-10 11:26:31.701203] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.838 [2024-06-10 11:26:31.701204] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:07.406 "tick_rate": 2500000000, 00:18:07.406 "poll_groups": [ 00:18:07.406 { 00:18:07.406 "name": "nvmf_tgt_poll_group_000", 00:18:07.406 "admin_qpairs": 0, 00:18:07.406 "io_qpairs": 0, 00:18:07.406 "current_admin_qpairs": 0, 00:18:07.406 "current_io_qpairs": 0, 00:18:07.406 "pending_bdev_io": 0, 00:18:07.406 "completed_nvme_io": 0, 00:18:07.406 "transports": [] 00:18:07.406 }, 00:18:07.406 { 00:18:07.406 "name": "nvmf_tgt_poll_group_001", 00:18:07.406 "admin_qpairs": 0, 00:18:07.406 "io_qpairs": 0, 00:18:07.406 "current_admin_qpairs": 0, 00:18:07.406 "current_io_qpairs": 0, 00:18:07.406 "pending_bdev_io": 0, 00:18:07.406 "completed_nvme_io": 0, 00:18:07.406 "transports": [] 00:18:07.406 }, 00:18:07.406 { 00:18:07.406 "name": "nvmf_tgt_poll_group_002", 00:18:07.406 "admin_qpairs": 0, 00:18:07.406 "io_qpairs": 0, 00:18:07.406 "current_admin_qpairs": 0, 00:18:07.406 "current_io_qpairs": 0, 00:18:07.406 "pending_bdev_io": 0, 00:18:07.406 "completed_nvme_io": 0, 00:18:07.406 "transports": [] 
00:18:07.406 }, 00:18:07.406 { 00:18:07.406 "name": "nvmf_tgt_poll_group_003", 00:18:07.406 "admin_qpairs": 0, 00:18:07.406 "io_qpairs": 0, 00:18:07.406 "current_admin_qpairs": 0, 00:18:07.406 "current_io_qpairs": 0, 00:18:07.406 "pending_bdev_io": 0, 00:18:07.406 "completed_nvme_io": 0, 00:18:07.406 "transports": [] 00:18:07.406 } 00:18:07.406 ] 00:18:07.406 }' 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:07.406 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.665 [2024-06-10 11:26:32.563504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:07.665 "tick_rate": 2500000000, 00:18:07.665 "poll_groups": [ 00:18:07.665 { 00:18:07.665 "name": "nvmf_tgt_poll_group_000", 00:18:07.665 "admin_qpairs": 0, 00:18:07.665 "io_qpairs": 0, 00:18:07.665 "current_admin_qpairs": 0, 00:18:07.665 "current_io_qpairs": 0, 00:18:07.665 "pending_bdev_io": 0, 00:18:07.665 "completed_nvme_io": 0, 00:18:07.665 "transports": [ 00:18:07.665 { 00:18:07.665 "trtype": "TCP" 00:18:07.665 } 00:18:07.665 ] 00:18:07.665 }, 00:18:07.665 { 00:18:07.665 "name": "nvmf_tgt_poll_group_001", 00:18:07.665 "admin_qpairs": 0, 00:18:07.665 "io_qpairs": 0, 00:18:07.665 "current_admin_qpairs": 0, 00:18:07.665 "current_io_qpairs": 0, 00:18:07.665 "pending_bdev_io": 0, 00:18:07.665 "completed_nvme_io": 0, 00:18:07.665 "transports": [ 00:18:07.665 { 00:18:07.665 "trtype": "TCP" 00:18:07.665 } 00:18:07.665 ] 00:18:07.665 }, 00:18:07.665 { 00:18:07.665 "name": "nvmf_tgt_poll_group_002", 00:18:07.665 "admin_qpairs": 0, 00:18:07.665 "io_qpairs": 0, 00:18:07.665 "current_admin_qpairs": 0, 00:18:07.665 "current_io_qpairs": 0, 00:18:07.665 "pending_bdev_io": 0, 00:18:07.665 "completed_nvme_io": 0, 00:18:07.665 "transports": [ 00:18:07.665 { 00:18:07.665 "trtype": "TCP" 00:18:07.665 } 00:18:07.665 ] 00:18:07.665 }, 00:18:07.665 { 00:18:07.665 "name": "nvmf_tgt_poll_group_003", 00:18:07.665 "admin_qpairs": 0, 00:18:07.665 "io_qpairs": 0, 00:18:07.665 "current_admin_qpairs": 0, 00:18:07.665 "current_io_qpairs": 0, 00:18:07.665 "pending_bdev_io": 0, 00:18:07.665 "completed_nvme_io": 0, 00:18:07.665 "transports": [ 00:18:07.665 { 00:18:07.665 "trtype": "TCP" 00:18:07.665 } 00:18:07.665 ] 00:18:07.665 } 00:18:07.665 ] 
00:18:07.665 }' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.665 Malloc1 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:07.665 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.666 [2024-06-10 11:26:32.744470] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 
--hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:18:07.666 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:18:07.924 [2024-06-10 11:26:32.773114] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562' 00:18:07.924 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:07.924 could not add new controller: failed to write to nvme-fabrics device 00:18:07.924 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:18:07.924 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:07.924 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:07.924 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:07.924 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:07.924 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.924 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.924 11:26:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.924 11:26:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:09.300 11:26:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:09.301 11:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:18:09.301 11:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.301 11:26:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:09.301 11:26:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:11.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.248 [2024-06-10 11:26:36.289049] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562' 00:18:11.248 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:11.248 could not add new controller: failed to write to nvme-fabrics device 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.248 11:26:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.643 11:26:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:12.643 11:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:18:12.643 11:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.643 11:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:12.643 11:26:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:15.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.178 [2024-06-10 11:26:39.854789] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.178 11:26:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:16.116 11:26:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:16.116 11:26:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:18:16.116 11:26:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.116 11:26:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:16.116 11:26:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.651 [2024-06-10 11:26:43.367654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:18.651 
11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.651 11:26:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:19.586 11:26:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:19.586 11:26:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:18:19.586 11:26:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.586 11:26:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:19.586 11:26:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:22.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:22.120 11:26:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.120 [2024-06-10 11:26:46.864030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.120 11:26:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:23.497 11:26:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:23.497 11:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:18:23.497 11:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.497 11:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:23.497 11:26:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:25.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 [2024-06-10 11:26:50.376838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.401 11:26:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:26.779 11:26:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:26.779 11:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:18:26.779 11:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
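Each pass of the loop above repeats the same create/connect/tear-down cycle against nqn.2016-06.io.spdk:cnode1. The sketch below restates one iteration using the rpc.py verbs and nvme-cli invocation visible in the log; having rpc.py on PATH and folding the one-time Malloc1 bdev creation into the snippet are assumptions made for readability, not part of the captured run.

# Sketch of one iteration of the target/rpc.sh create/connect/tear-down loop.
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
rpc.py bdev_malloc_create 64 512 -b Malloc1                # backing bdev, created once before the loop
rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
rpc.py nvmf_subsystem_allow_any_host "$NQN"
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid=809b5fbc-4be7-e711-906e-0017a4403562
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # wait until the namespace shows up
nvme disconnect -n "$NQN"
rpc.py nvmf_subsystem_remove_ns "$NQN" 5
rpc.py nvmf_delete_subsystem "$NQN"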
00:18:26.779 11:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:26.779 11:26:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:18:28.682 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:28.682 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:28.682 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:28.682 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:28.682 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:28.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.941 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.942 [2024-06-10 11:26:53.915825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.942 11:26:53 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.942 11:26:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:30.347 11:26:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:30.347 11:26:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:18:30.347 11:26:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.347 11:26:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:30.347 11:26:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:32.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:18:32.250 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 [2024-06-10 11:26:57.436157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.510 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 [2024-06-10 11:26:57.484261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 [2024-06-10 11:26:57.536443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 [2024-06-10 11:26:57.584609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:18:32.511 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.771 [2024-06-10 11:26:57.632790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.771 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:32.772 "tick_rate": 2500000000, 00:18:32.772 "poll_groups": [ 00:18:32.772 { 00:18:32.772 "name": "nvmf_tgt_poll_group_000", 00:18:32.772 "admin_qpairs": 2, 00:18:32.772 
"io_qpairs": 196, 00:18:32.772 "current_admin_qpairs": 0, 00:18:32.772 "current_io_qpairs": 0, 00:18:32.772 "pending_bdev_io": 0, 00:18:32.772 "completed_nvme_io": 247, 00:18:32.772 "transports": [ 00:18:32.772 { 00:18:32.772 "trtype": "TCP" 00:18:32.772 } 00:18:32.772 ] 00:18:32.772 }, 00:18:32.772 { 00:18:32.772 "name": "nvmf_tgt_poll_group_001", 00:18:32.772 "admin_qpairs": 2, 00:18:32.772 "io_qpairs": 196, 00:18:32.772 "current_admin_qpairs": 0, 00:18:32.772 "current_io_qpairs": 0, 00:18:32.772 "pending_bdev_io": 0, 00:18:32.772 "completed_nvme_io": 297, 00:18:32.772 "transports": [ 00:18:32.772 { 00:18:32.772 "trtype": "TCP" 00:18:32.772 } 00:18:32.772 ] 00:18:32.772 }, 00:18:32.772 { 00:18:32.772 "name": "nvmf_tgt_poll_group_002", 00:18:32.772 "admin_qpairs": 1, 00:18:32.772 "io_qpairs": 196, 00:18:32.772 "current_admin_qpairs": 0, 00:18:32.772 "current_io_qpairs": 0, 00:18:32.772 "pending_bdev_io": 0, 00:18:32.772 "completed_nvme_io": 295, 00:18:32.772 "transports": [ 00:18:32.772 { 00:18:32.772 "trtype": "TCP" 00:18:32.772 } 00:18:32.772 ] 00:18:32.772 }, 00:18:32.772 { 00:18:32.772 "name": "nvmf_tgt_poll_group_003", 00:18:32.772 "admin_qpairs": 2, 00:18:32.772 "io_qpairs": 196, 00:18:32.772 "current_admin_qpairs": 0, 00:18:32.772 "current_io_qpairs": 0, 00:18:32.772 "pending_bdev_io": 0, 00:18:32.772 "completed_nvme_io": 295, 00:18:32.772 "transports": [ 00:18:32.772 { 00:18:32.772 "trtype": "TCP" 00:18:32.772 } 00:18:32.772 ] 00:18:32.772 } 00:18:32.772 ] 00:18:32.772 }' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:32.772 rmmod nvme_tcp 00:18:32.772 rmmod nvme_fabrics 00:18:32.772 rmmod nvme_keyring 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:18:32.772 11:26:57 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3833268 ']' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3833268 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 3833268 ']' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 3833268 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:32.772 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3833268 00:18:33.032 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:33.032 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:33.032 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3833268' 00:18:33.032 killing process with pid 3833268 00:18:33.032 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 3833268 00:18:33.032 11:26:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 3833268 00:18:33.293 11:26:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.293 11:26:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:33.293 11:26:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:33.293 11:26:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.293 11:26:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.293 11:26:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.293 11:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.293 11:26:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.202 11:27:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:35.202 00:18:35.202 real 0m37.371s 00:18:35.202 user 1m46.750s 00:18:35.202 sys 0m9.591s 00:18:35.202 11:27:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:35.202 11:27:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:35.202 ************************************ 00:18:35.202 END TEST nvmf_rpc 00:18:35.202 ************************************ 00:18:35.202 11:27:00 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:35.202 11:27:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:35.202 11:27:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:35.202 11:27:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:35.202 ************************************ 00:18:35.202 START TEST nvmf_invalid 00:18:35.202 ************************************ 00:18:35.202 11:27:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:35.461 * Looking for test storage... 
00:18:35.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:35.461 11:27:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:18:35.462 11:27:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:45.443 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:45.443 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:45.443 Found net devices under 0000:af:00.0: cvl_0_0 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.443 11:27:08 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:45.444 Found net devices under 0000:af:00.1: cvl_0_1 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:45.444 11:27:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:45.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:45.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:18:45.444 00:18:45.444 --- 10.0.0.2 ping statistics --- 00:18:45.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.444 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:18:45.444 00:18:45.444 --- 10.0.0.1 ping statistics --- 00:18:45.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.444 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3842963 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3842963 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 3842963 ']' 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:45.444 11:27:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:45.444 [2024-06-10 11:27:09.151792] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:18:45.444 [2024-06-10 11:27:09.151853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.444 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.444 [2024-06-10 11:27:09.270016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.444 [2024-06-10 11:27:09.356865] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.444 [2024-06-10 11:27:09.356907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.444 [2024-06-10 11:27:09.356921] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.444 [2024-06-10 11:27:09.356933] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.444 [2024-06-10 11:27:09.356943] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.444 [2024-06-10 11:27:09.357007] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.444 [2024-06-10 11:27:09.357099] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.444 [2024-06-10 11:27:09.357187] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.444 [2024-06-10 11:27:09.357187] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8034 00:18:45.444 [2024-06-10 11:27:10.279477] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:45.444 { 00:18:45.444 "nqn": "nqn.2016-06.io.spdk:cnode8034", 00:18:45.444 "tgt_name": "foobar", 00:18:45.444 "method": "nvmf_create_subsystem", 00:18:45.444 "req_id": 1 00:18:45.444 } 00:18:45.444 Got JSON-RPC error response 00:18:45.444 response: 00:18:45.444 { 00:18:45.444 "code": -32603, 00:18:45.444 "message": "Unable to find target foobar" 00:18:45.444 }' 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:45.444 { 00:18:45.444 "nqn": "nqn.2016-06.io.spdk:cnode8034", 00:18:45.444 "tgt_name": "foobar", 00:18:45.444 "method": "nvmf_create_subsystem", 00:18:45.444 "req_id": 1 00:18:45.444 } 00:18:45.444 Got JSON-RPC error response 00:18:45.444 response: 00:18:45.444 { 00:18:45.444 "code": -32603, 00:18:45.444 "message": "Unable to find target foobar" 00:18:45.444 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:45.444 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29602 00:18:45.444 [2024-06-10 11:27:10.520436] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29602: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:45.704 { 00:18:45.704 "nqn": "nqn.2016-06.io.spdk:cnode29602", 00:18:45.704 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:45.704 "method": "nvmf_create_subsystem", 00:18:45.704 "req_id": 1 00:18:45.704 } 00:18:45.704 Got JSON-RPC error response 00:18:45.704 response: 00:18:45.704 { 00:18:45.704 "code": -32602, 00:18:45.704 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:45.704 }' 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:45.704 { 00:18:45.704 "nqn": "nqn.2016-06.io.spdk:cnode29602", 00:18:45.704 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:45.704 "method": "nvmf_create_subsystem", 00:18:45.704 "req_id": 1 00:18:45.704 } 00:18:45.704 Got JSON-RPC error response 00:18:45.704 response: 00:18:45.704 { 00:18:45.704 "code": -32602, 00:18:45.704 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:45.704 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7677 00:18:45.704 [2024-06-10 11:27:10.761181] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7677: invalid model number 'SPDK_Controller' 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:45.704 { 00:18:45.704 "nqn": "nqn.2016-06.io.spdk:cnode7677", 00:18:45.704 "model_number": "SPDK_Controller\u001f", 00:18:45.704 "method": "nvmf_create_subsystem", 00:18:45.704 "req_id": 1 00:18:45.704 } 00:18:45.704 Got JSON-RPC error response 00:18:45.704 response: 00:18:45.704 { 00:18:45.704 "code": -32602, 00:18:45.704 "message": "Invalid MN SPDK_Controller\u001f" 00:18:45.704 }' 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:45.704 { 00:18:45.704 "nqn": "nqn.2016-06.io.spdk:cnode7677", 00:18:45.704 "model_number": "SPDK_Controller\u001f", 00:18:45.704 "method": "nvmf_create_subsystem", 00:18:45.704 "req_id": 1 00:18:45.704 } 00:18:45.704 Got JSON-RPC error response 00:18:45.704 response: 00:18:45.704 { 00:18:45.704 "code": -32602, 00:18:45.704 "message": "Invalid MN SPDK_Controller\u001f" 00:18:45.704 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:45.704 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 80 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.013 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.014 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:18:46.014 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ')#;Ij7h6?IRU%@P0PGDr' 00:18:46.014 11:27:10 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ')#;Ij7h6?IRU%@P0PGDr' nqn.2016-06.io.spdk:cnode18501 00:18:46.278 [2024-06-10 11:27:11.166645] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18501: invalid serial number ')#;Ij7h6?IRU%@P0PGDr' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:46.278 { 00:18:46.278 "nqn": "nqn.2016-06.io.spdk:cnode18501", 00:18:46.278 "serial_number": ")#;Ij7h6?\u007fIRU%@P0PGDr", 00:18:46.278 "method": "nvmf_create_subsystem", 00:18:46.278 "req_id": 1 00:18:46.278 } 00:18:46.278 Got JSON-RPC error response 00:18:46.278 response: 00:18:46.278 { 00:18:46.278 "code": -32602, 
00:18:46.278 "message": "Invalid SN )#;Ij7h6?\u007fIRU%@P0PGDr" 00:18:46.278 }' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:46.278 { 00:18:46.278 "nqn": "nqn.2016-06.io.spdk:cnode18501", 00:18:46.278 "serial_number": ")#;Ij7h6?\u007fIRU%@P0PGDr", 00:18:46.278 "method": "nvmf_create_subsystem", 00:18:46.278 "req_id": 1 00:18:46.278 } 00:18:46.278 Got JSON-RPC error response 00:18:46.278 response: 00:18:46.278 { 00:18:46.278 "code": -32602, 00:18:46.278 "message": "Invalid SN )#;Ij7h6?\u007fIRU%@P0PGDr" 00:18:46.278 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:46.278 
11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.278 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:46.279 
11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
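The long run of printf / echo / string+= records here is a shell helper assembling a random 41-character string one byte at a time, drawing codes from ASCII 32 through 127 (space through DEL). A condensed sketch of the equivalent logic, for orientation only (the helper's real name, character selection and quoting details live in target/invalid.sh):

  # Illustrative equivalent of the traced loop: pick a code point in 32..127
  # on each iteration and append the corresponding character to the string.
  gen_random_string() {
    local length=$1 ll out=
    for (( ll = 0; ll < length; ll++ )); do
      local code=$(( RANDOM % 96 + 32 ))
      out+=$(printf '%b' "\x$(printf '%x' "$code")")
    done
    printf '%s\n' "$out"
  }

Bytes such as 0x7f (DEL) that this range can emit fall outside the printable characters the target accepts, which is presumably why the generated serial and model numbers are rejected below.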
00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.279 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.538 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
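Once assembled, the random string below is handed to nvmf_create_subsystem as a model number, and the step only counts as passed if the RPC rejects it. The shape of that negative check, simplified for readability (assumed form; error-capture details such as the redirection are not visible in the trace, and the authoritative version is target/invalid.sh itself):

  # Simplified shape of the negative test: capture the JSON-RPC error text and
  # require the expected rejection reason. $rpc_py, $model_number and $nqn are
  # placeholders for the values visible in the surrounding trace.
  out=$("$rpc_py" nvmf_create_subsystem -d "$model_number" "$nqn" 2>&1) || true
  [[ $out == *"Invalid MN"* ]]

The same pattern is used earlier in this run for serial numbers, matching on "Invalid SN" instead.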
00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ S == \- ]] 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'S(Jy}JeV[K:ajwUvu!%weH?n1tc'\''6-8NWu!%?K>' 00:18:46.539 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'S(Jy}JeV[K:ajwUvu!%weH?n1tc'\''6-8NWu!%?K>' nqn.2016-06.io.spdk:cnode3963 00:18:46.798 [2024-06-10 11:27:11.728635] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3963: invalid model number 'S(Jy}JeV[K:ajwUvu!%weH?n1tc'6-8NWu!%?K>' 00:18:46.798 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:46.798 { 00:18:46.798 "nqn": "nqn.2016-06.io.spdk:cnode3963", 00:18:46.798 "model_number": "S(Jy}JeV[K:ajwUvu!%weH?n1tc'\''6\u007f-8NWu!%?K>\u007f", 00:18:46.798 "method": "nvmf_create_subsystem", 00:18:46.798 "req_id": 1 00:18:46.798 } 00:18:46.798 Got JSON-RPC error response 00:18:46.798 response: 00:18:46.798 { 00:18:46.798 "code": -32602, 00:18:46.798 "message": "Invalid MN S(Jy}JeV[K:ajwUvu!%weH?n1tc'\''6\u007f-8NWu!%?K>\u007f" 00:18:46.798 }' 00:18:46.798 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:46.798 { 00:18:46.798 "nqn": "nqn.2016-06.io.spdk:cnode3963", 00:18:46.798 "model_number": "S(Jy}JeV[K:ajwUvu!%weH?n1tc'6\u007f-8NWu!%?K>\u007f", 00:18:46.798 "method": "nvmf_create_subsystem", 00:18:46.798 "req_id": 1 00:18:46.798 } 00:18:46.798 Got JSON-RPC error response 00:18:46.798 response: 00:18:46.798 { 00:18:46.798 "code": -32602, 00:18:46.798 "message": "Invalid MN S(Jy}JeV[K:ajwUvu!%weH?n1tc'6\u007f-8NWu!%?K>\u007f" 00:18:46.798 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:46.798 11:27:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:47.056 [2024-06-10 11:27:11.909334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.056 11:27:11 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:47.316 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:47.316 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:47.316 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:47.316 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:47.316 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:47.316 [2024-06-10 11:27:12.399088] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:47.575 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:47.575 { 00:18:47.575 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:47.575 "listen_address": { 00:18:47.575 "trtype": "tcp", 00:18:47.575 "traddr": "", 00:18:47.575 "trsvcid": "4421" 00:18:47.575 }, 00:18:47.575 "method": "nvmf_subsystem_remove_listener", 00:18:47.575 "req_id": 1 00:18:47.575 } 00:18:47.575 Got JSON-RPC error response 00:18:47.575 response: 00:18:47.575 { 00:18:47.575 "code": -32602, 00:18:47.575 "message": "Invalid parameters" 00:18:47.575 }' 00:18:47.575 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:47.575 { 00:18:47.575 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:47.575 "listen_address": { 00:18:47.575 "trtype": "tcp", 00:18:47.575 "traddr": "", 00:18:47.575 "trsvcid": "4421" 00:18:47.575 }, 00:18:47.575 "method": "nvmf_subsystem_remove_listener", 00:18:47.575 "req_id": 1 00:18:47.575 } 00:18:47.575 Got JSON-RPC error response 00:18:47.575 response: 00:18:47.575 { 00:18:47.575 "code": -32602, 00:18:47.575 "message": "Invalid parameters" 00:18:47.575 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:47.575 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16325 -i 0 00:18:47.575 [2024-06-10 11:27:12.631822] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16325: invalid cntlid range [0-65519] 00:18:47.575 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:47.575 { 00:18:47.575 "nqn": "nqn.2016-06.io.spdk:cnode16325", 00:18:47.575 "min_cntlid": 0, 00:18:47.575 "method": "nvmf_create_subsystem", 00:18:47.575 "req_id": 1 00:18:47.575 } 00:18:47.575 Got JSON-RPC error response 00:18:47.575 response: 00:18:47.575 { 00:18:47.575 "code": -32602, 00:18:47.575 "message": "Invalid cntlid range [0-65519]" 00:18:47.575 }' 00:18:47.575 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:47.575 { 00:18:47.575 "nqn": "nqn.2016-06.io.spdk:cnode16325", 00:18:47.575 "min_cntlid": 0, 00:18:47.575 "method": "nvmf_create_subsystem", 00:18:47.575 "req_id": 1 00:18:47.575 } 00:18:47.575 Got JSON-RPC error response 00:18:47.575 response: 00:18:47.575 { 00:18:47.575 "code": -32602, 00:18:47.575 "message": "Invalid cntlid range [0-65519]" 00:18:47.575 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:47.575 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19323 -i 65520 00:18:47.834 
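The invocation just above and the calls that follow probe the controller ID (cntlid) bounds: a min_cntlid of 0 or 65520, a max_cntlid of 0 or 65520, and an inverted 6-5 range, each expected to come back as an "Invalid cntlid range" error. Written out as an illustrative check (this mirrors the rejections logged below, not the target's actual code):

  # A range is accepted only if 1 <= min_cntlid <= max_cntlid <= 65519;
  # anything else is answered with code -32602 and "Invalid cntlid range".
  valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
  }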
[2024-06-10 11:27:12.868645] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19323: invalid cntlid range [65520-65519] 00:18:47.834 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:47.834 { 00:18:47.834 "nqn": "nqn.2016-06.io.spdk:cnode19323", 00:18:47.834 "min_cntlid": 65520, 00:18:47.834 "method": "nvmf_create_subsystem", 00:18:47.834 "req_id": 1 00:18:47.834 } 00:18:47.834 Got JSON-RPC error response 00:18:47.834 response: 00:18:47.834 { 00:18:47.834 "code": -32602, 00:18:47.834 "message": "Invalid cntlid range [65520-65519]" 00:18:47.834 }' 00:18:47.834 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:47.834 { 00:18:47.834 "nqn": "nqn.2016-06.io.spdk:cnode19323", 00:18:47.834 "min_cntlid": 65520, 00:18:47.834 "method": "nvmf_create_subsystem", 00:18:47.834 "req_id": 1 00:18:47.834 } 00:18:47.834 Got JSON-RPC error response 00:18:47.834 response: 00:18:47.834 { 00:18:47.834 "code": -32602, 00:18:47.834 "message": "Invalid cntlid range [65520-65519]" 00:18:47.834 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:47.834 11:27:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6057 -I 0 00:18:48.093 [2024-06-10 11:27:13.109500] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6057: invalid cntlid range [1-0] 00:18:48.094 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:48.094 { 00:18:48.094 "nqn": "nqn.2016-06.io.spdk:cnode6057", 00:18:48.094 "max_cntlid": 0, 00:18:48.094 "method": "nvmf_create_subsystem", 00:18:48.094 "req_id": 1 00:18:48.094 } 00:18:48.094 Got JSON-RPC error response 00:18:48.094 response: 00:18:48.094 { 00:18:48.094 "code": -32602, 00:18:48.094 "message": "Invalid cntlid range [1-0]" 00:18:48.094 }' 00:18:48.094 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:48.094 { 00:18:48.094 "nqn": "nqn.2016-06.io.spdk:cnode6057", 00:18:48.094 "max_cntlid": 0, 00:18:48.094 "method": "nvmf_create_subsystem", 00:18:48.094 "req_id": 1 00:18:48.094 } 00:18:48.094 Got JSON-RPC error response 00:18:48.094 response: 00:18:48.094 { 00:18:48.094 "code": -32602, 00:18:48.094 "message": "Invalid cntlid range [1-0]" 00:18:48.094 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:48.094 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12077 -I 65520 00:18:48.353 [2024-06-10 11:27:13.342353] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12077: invalid cntlid range [1-65520] 00:18:48.353 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:48.353 { 00:18:48.353 "nqn": "nqn.2016-06.io.spdk:cnode12077", 00:18:48.353 "max_cntlid": 65520, 00:18:48.353 "method": "nvmf_create_subsystem", 00:18:48.353 "req_id": 1 00:18:48.353 } 00:18:48.353 Got JSON-RPC error response 00:18:48.353 response: 00:18:48.353 { 00:18:48.353 "code": -32602, 00:18:48.353 "message": "Invalid cntlid range [1-65520]" 00:18:48.353 }' 00:18:48.353 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:48.353 { 00:18:48.353 "nqn": "nqn.2016-06.io.spdk:cnode12077", 00:18:48.353 "max_cntlid": 65520, 00:18:48.353 "method": "nvmf_create_subsystem", 00:18:48.353 "req_id": 1 00:18:48.353 } 00:18:48.353 Got 
JSON-RPC error response 00:18:48.353 response: 00:18:48.353 { 00:18:48.353 "code": -32602, 00:18:48.353 "message": "Invalid cntlid range [1-65520]" 00:18:48.353 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:48.353 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9615 -i 6 -I 5 00:18:48.612 [2024-06-10 11:27:13.583216] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9615: invalid cntlid range [6-5] 00:18:48.612 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:48.612 { 00:18:48.612 "nqn": "nqn.2016-06.io.spdk:cnode9615", 00:18:48.612 "min_cntlid": 6, 00:18:48.612 "max_cntlid": 5, 00:18:48.612 "method": "nvmf_create_subsystem", 00:18:48.612 "req_id": 1 00:18:48.612 } 00:18:48.612 Got JSON-RPC error response 00:18:48.612 response: 00:18:48.612 { 00:18:48.612 "code": -32602, 00:18:48.612 "message": "Invalid cntlid range [6-5]" 00:18:48.612 }' 00:18:48.612 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:48.612 { 00:18:48.612 "nqn": "nqn.2016-06.io.spdk:cnode9615", 00:18:48.612 "min_cntlid": 6, 00:18:48.612 "max_cntlid": 5, 00:18:48.612 "method": "nvmf_create_subsystem", 00:18:48.612 "req_id": 1 00:18:48.612 } 00:18:48.612 Got JSON-RPC error response 00:18:48.612 response: 00:18:48.612 { 00:18:48.612 "code": -32602, 00:18:48.612 "message": "Invalid cntlid range [6-5]" 00:18:48.612 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:48.612 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:48.871 { 00:18:48.871 "name": "foobar", 00:18:48.871 "method": "nvmf_delete_target", 00:18:48.871 "req_id": 1 00:18:48.871 } 00:18:48.871 Got JSON-RPC error response 00:18:48.871 response: 00:18:48.871 { 00:18:48.871 "code": -32602, 00:18:48.871 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:48.871 }' 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:48.871 { 00:18:48.871 "name": "foobar", 00:18:48.871 "method": "nvmf_delete_target", 00:18:48.871 "req_id": 1 00:18:48.871 } 00:18:48.871 Got JSON-RPC error response 00:18:48.871 response: 00:18:48.871 { 00:18:48.871 "code": -32602, 00:18:48.871 "message": "The specified target doesn't exist, cannot delete it." 
00:18:48.871 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.871 rmmod nvme_tcp 00:18:48.871 rmmod nvme_fabrics 00:18:48.871 rmmod nvme_keyring 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3842963 ']' 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3842963 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 3842963 ']' 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 3842963 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3842963 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3842963' 00:18:48.871 killing process with pid 3842963 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 3842963 00:18:48.871 11:27:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 3842963 00:18:49.131 11:27:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.131 11:27:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.131 11:27:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.131 11:27:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.131 11:27:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.131 11:27:14 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.131 11:27:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.131 11:27:14 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.037 11:27:16 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:51.297 00:18:51.297 real 0m15.849s 00:18:51.297 user 0m23.935s 00:18:51.297 sys 0m7.880s 00:18:51.297 11:27:16 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:51.297 11:27:16 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:51.297 ************************************ 00:18:51.297 END TEST nvmf_invalid 00:18:51.297 ************************************ 00:18:51.297 11:27:16 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:51.297 11:27:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:51.297 11:27:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:51.297 11:27:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:51.297 ************************************ 00:18:51.297 START TEST nvmf_abort 00:18:51.297 ************************************ 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:51.297 * Looking for test storage... 00:18:51.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:51.297 11:27:16 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.297 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:51.298 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:51.298 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:51.298 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.298 11:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.298 11:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.298 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:51.298 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:51.298 11:27:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:18:51.298 11:27:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.282 
11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:01.282 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:01.282 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:01.282 Found net devices under 0000:af:00.0: cvl_0_0 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:01.282 Found net devices under 0000:af:00.1: cvl_0_1 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:01.282 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.283 11:27:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:01.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:01.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:19:01.283 00:19:01.283 --- 10.0.0.2 ping statistics --- 00:19:01.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.283 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:19:01.283 00:19:01.283 --- 10.0.0.1 ping statistics --- 00:19:01.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.283 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3848396 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3848396 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 3848396 ']' 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:01.283 11:27:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 [2024-06-10 11:27:25.284008] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
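For reference, the nvmf_tcp_init sequence traced above boils down to the sketch below. The interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.0/24 addressing are taken from this run; on another rig the E810 port names would differ, so treat this as a recap of what the trace did rather than a copy of nvmf/common.sh.

  # Move one port into a private namespace to act as the NVMe/TCP target,
  # leave the other port in the host namespace as the initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # host -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> host check

With both pings answered, the nvmf_tgt processes below are launched through ip netns exec cvl_0_0_ns_spdk so they listen on 10.0.0.2 while the initiator-side tools connect from the host namespace.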
00:19:01.283 [2024-06-10 11:27:25.284078] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.283 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.283 [2024-06-10 11:27:25.402366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:01.283 [2024-06-10 11:27:25.490316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.283 [2024-06-10 11:27:25.490353] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.283 [2024-06-10 11:27:25.490368] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.283 [2024-06-10 11:27:25.490380] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.283 [2024-06-10 11:27:25.490391] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.283 [2024-06-10 11:27:25.490516] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.283 [2024-06-10 11:27:25.490630] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.283 [2024-06-10 11:27:25.490631] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 [2024-06-10 11:27:26.248055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 Malloc0 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 Delay0 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:01.283 11:27:26 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 [2024-06-10 11:27:26.322902] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.283 11:27:26 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:19:01.283 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.542 [2024-06-10 11:27:26.453596] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:03.445 Initializing NVMe Controllers 00:19:03.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:03.445 controller IO queue size 128 less than required 00:19:03.445 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:19:03.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:19:03.445 Initialization complete. Launching workers. 
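The abort test setup traced above is a short rpc.py sequence; a condensed sketch follows. rpc_cmd in the trace is a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and the full workspace paths are shortened here for readability.

  RPC="./scripts/rpc.py"    # full path in the trace: .../spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Delay0 layers artificial latency on top of Malloc0, so a queue depth of 128
  # keeps plenty of commands in flight for the abort example to cancel.
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128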
00:19:03.445 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30422 00:19:03.445 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30483, failed to submit 62 00:19:03.445 success 30426, unsuccess 57, failed 0 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.445 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.445 rmmod nvme_tcp 00:19:03.445 rmmod nvme_fabrics 00:19:03.705 rmmod nvme_keyring 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3848396 ']' 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3848396 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 3848396 ']' 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 3848396 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3848396 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3848396' 00:19:03.705 killing process with pid 3848396 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 3848396 00:19:03.705 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 3848396 00:19:03.964 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.964 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:03.964 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:03.964 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.964 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:03.964 11:27:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.964 11:27:28 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.964 11:27:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.870 11:27:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:05.870 00:19:05.870 real 0m14.729s 00:19:05.870 user 0m14.112s 00:19:05.870 sys 0m7.985s 00:19:05.870 11:27:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:05.870 11:27:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:19:05.870 ************************************ 00:19:05.870 END TEST nvmf_abort 00:19:05.870 ************************************ 00:19:06.129 11:27:30 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:06.129 11:27:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:06.129 11:27:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:06.129 11:27:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.130 ************************************ 00:19:06.130 START TEST nvmf_ns_hotplug_stress 00:19:06.130 ************************************ 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:19:06.130 * Looking for test storage... 00:19:06.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.130 11:27:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.130 11:27:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.130 11:27:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:16.110 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:16.110 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.110 11:27:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:16.110 Found net devices under 0000:af:00.0: cvl_0_0 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:16.110 Found net devices under 0000:af:00.1: cvl_0_1 00:19:16.110 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.111 11:27:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:16.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:19:16.111 00:19:16.111 --- 10.0.0.2 ping statistics --- 00:19:16.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.111 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:19:16.111 00:19:16.111 --- 10.0.0.1 ping statistics --- 00:19:16.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.111 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3853636 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3853636 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 3853636 ']' 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:16.111 11:27:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:16.111 [2024-06-10 11:27:40.250506] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:19:16.111 [2024-06-10 11:27:40.250567] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.111 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.111 [2024-06-10 11:27:40.369246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:16.111 [2024-06-10 11:27:40.449777] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
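The ns_hotplug_stress body that the trace below walks through amounts to the loop sketched here: create a subsystem capped at 10 namespaces, attach a delay bdev and a null bdev, start spdk_nvme_perf against it, then keep hot-removing and re-adding namespace 1 while resizing NULL1 upward for as long as the perf process runs. The parameters (cnode1, NULL1, the starting size 1000, the perf arguments) are taken from the trace; the rpc.py path is shortened, and the real script performs further stages beyond this condensed sketch.

  RPC="./scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC bdev_null_create NULL1 1000 512
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  ./build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  size=1000
  while kill -0 $PERF_PID 2>/dev/null; do
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove NSID 1 under load
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-add it
      size=$((size + 1))
      $RPC bdev_null_resize NULL1 $size                             # grow the null bdev as well
  done

The -Q 1000 perf option rate-limits error reporting, which is consistent with the repeated 'Message suppressed 999 times: Read completed with error (sct=0, sc=11)' lines in the trace: reads that land while a namespace is being removed or resized complete with an error, and only one in a thousand is printed.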
00:19:16.111 [2024-06-10 11:27:40.449826] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.111 [2024-06-10 11:27:40.449839] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.111 [2024-06-10 11:27:40.449851] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.111 [2024-06-10 11:27:40.449862] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.111 [2024-06-10 11:27:40.449970] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.111 [2024-06-10 11:27:40.450084] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.111 [2024-06-10 11:27:40.450085] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.111 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:16.111 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:19:16.111 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.111 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:16.111 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:16.111 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.111 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:19:16.111 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:16.370 [2024-06-10 11:27:41.419955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.370 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:16.629 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.888 [2024-06-10 11:27:41.899083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.888 11:27:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:17.151 11:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:19:17.409 Malloc0 00:19:17.409 11:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:17.668 Delay0 00:19:17.668 11:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:17.926 11:27:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:19:18.186 NULL1 00:19:18.186 11:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:18.445 11:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3854198 00:19:18.445 11:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:19:18.445 11:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:18.445 11:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:18.445 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.703 Read completed with error (sct=0, sc=11) 00:19:18.703 11:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:18.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:18.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:18.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:18.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:18.962 11:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:19:18.962 11:27:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:19:19.221 true 00:19:19.221 11:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:19.221 11:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:19.789 11:27:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:20.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:20.048 11:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:19:20.048 11:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:19:20.307 true 00:19:20.307 11:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:20.307 11:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:20.565 11:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:20.823 11:27:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:19:20.823 11:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:19:21.082 true 00:19:21.082 11:27:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:21.082 11:27:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.114 11:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:22.373 11:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:19:22.373 11:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:19:22.373 true 00:19:22.631 11:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:22.631 11:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.631 11:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:22.890 11:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:19:22.890 11:27:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:19:23.149 true 00:19:23.149 11:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:23.149 11:27:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:24.084 11:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:24.343 11:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:19:24.343 11:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:19:24.343 true 00:19:24.602 11:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:24.602 11:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:24.602 11:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:24.861 11:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:19:24.861 11:27:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:19:25.119 true 00:19:25.119 11:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:25.119 11:27:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.056 11:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:26.315 11:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:19:26.315 11:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:19:26.574 true 00:19:26.575 11:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:26.575 11:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.834 11:27:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:27.092 11:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:19:27.092 11:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:19:27.351 true 00:19:27.351 11:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:27.351 11:27:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:28.286 11:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:28.544 11:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:19:28.544 11:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:19:28.544 true 00:19:28.803 11:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:28.803 11:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:28.803 11:27:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:29.061 11:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:19:29.061 11:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:19:29.320 true 00:19:29.320 11:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:29.320 11:27:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:30.256 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:30.256 11:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:30.515 11:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:19:30.515 11:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:19:30.773 true 00:19:30.773 11:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:30.773 11:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:31.032 11:27:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:31.290 11:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:19:31.290 11:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:19:31.290 true 00:19:31.290 11:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:31.290 11:27:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:32.666 11:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:32.666 11:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:19:32.666 11:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:19:32.924 true 00:19:32.924 11:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:32.924 11:27:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:33.183 11:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:33.441 11:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:19:33.441 11:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:19:33.441 true 00:19:33.699 11:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:33.699 11:27:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:34.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:34.636 11:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:34.636 11:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:19:34.636 11:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:19:34.894 true 00:19:34.894 11:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:34.894 11:27:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:35.153 11:28:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:35.411 11:28:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:19:35.411 11:28:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:19:35.669 true 00:19:35.669 11:28:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:35.669 11:28:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:36.605 11:28:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:36.863 11:28:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:19:36.863 11:28:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:19:37.121 true 00:19:37.121 11:28:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:37.121 11:28:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:37.380 11:28:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:37.639 11:28:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:19:37.639 11:28:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:19:37.897 true 00:19:37.897 11:28:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:37.897 11:28:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:38.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:38.832 11:28:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:38.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:38.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:38.832 11:28:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:19:38.832 11:28:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:19:39.091 true 00:19:39.091 11:28:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:39.091 11:28:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:39.350 11:28:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:39.608 11:28:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:19:39.608 11:28:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:19:39.867 true 00:19:39.867 11:28:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:39.867 11:28:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:40.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:40.804 11:28:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:41.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:41.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:41.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:41.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:41.062 11:28:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:19:41.062 11:28:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:19:41.321 true 00:19:41.321 11:28:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:41.321 11:28:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:19:42.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:42.258 11:28:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:42.517 11:28:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:19:42.517 11:28:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:19:42.517 true 00:19:42.517 11:28:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:42.517 11:28:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:42.776 11:28:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:43.035 11:28:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:19:43.035 11:28:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:19:43.294 true 00:19:43.294 11:28:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:43.294 11:28:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:44.231 11:28:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:44.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:44.491 11:28:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:19:44.491 11:28:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:19:44.491 true 00:19:44.491 11:28:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:44.491 11:28:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:44.753 11:28:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:45.102 11:28:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:19:45.102 11:28:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:19:45.361 true 00:19:45.361 11:28:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:45.361 11:28:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:46.297 11:28:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:46.556 11:28:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:19:46.556 11:28:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:19:46.556 true 00:19:46.556 11:28:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:46.556 11:28:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:46.815 11:28:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:47.073 11:28:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:19:47.073 11:28:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:19:47.073 true 00:19:47.332 11:28:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:47.332 11:28:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:48.270 11:28:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:48.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:48.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:48.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:48.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:48.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:48.529 11:28:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:19:48.529 11:28:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:19:48.788 true 00:19:48.788 11:28:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:48.788 11:28:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:49.726 11:28:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:49.726 Initializing NVMe Controllers 00:19:49.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.726 Controller IO queue size 128, less than required. 00:19:49.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:19:49.726 Controller IO queue size 128, less than required. 00:19:49.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:49.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:49.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:49.726 Initialization complete. Launching workers. 00:19:49.726 ======================================================== 00:19:49.726 Latency(us) 00:19:49.726 Device Information : IOPS MiB/s Average min max 00:19:49.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 889.70 0.43 86688.06 2456.50 1104881.93 00:19:49.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15329.71 7.49 8349.49 2231.35 503349.25 00:19:49.726 ======================================================== 00:19:49.726 Total : 16219.41 7.92 12646.66 2231.35 1104881.93 00:19:49.726 00:19:49.985 11:28:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:19:49.985 11:28:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:19:49.985 true 00:19:49.985 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3854198 00:19:49.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3854198) - No such process 00:19:49.985 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3854198 00:19:49.985 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:50.244 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:50.503 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:19:50.503 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:19:50.503 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:19:50.503 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:50.503 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:19:50.762 null0 00:19:50.762 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:50.762 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:50.762 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:19:51.020 null1 00:19:51.020 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:51.020 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:51.020 11:28:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 
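The xtrace up to this point is the first phase of the test: ns_hotplug_stress.sh keeps hot-removing and re-adding namespace 1 and growing the NULL1 bdev by one block per pass while the I/O generator (PID 3854198 here) is still alive; once kill -0 fails, it waits for the generator and tears down both namespaces. A minimal bash sketch of that pattern, reconstructed from the trace above rather than copied from the script (perf_pid and rpc are placeholder names, not the script's identifiers):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    # sh@44-@50: keep hot-plugging namespace 1 while the workload is still running
    while kill -0 "$perf_pid"; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1        # sh@45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0      # sh@46: re-attach it, backed by the Delay0 bdev
        null_size=$((null_size + 1))                    # sh@49: 1011, 1012, ... 1030 in the trace
        "$rpc" bdev_null_resize NULL1 "$null_size"      # sh@50: grow NULL1 while I/O is in flight
    done
    wait "$perf_pid"                                    # sh@53: collect the finished workload
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1            # sh@54
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 2            # sh@55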
00:19:51.279 null2 00:19:51.279 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:51.279 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:51.279 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:19:51.538 null3 00:19:51.538 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:51.538 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:51.538 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:19:51.538 null4 00:19:51.538 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:51.538 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:51.538 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:19:51.797 null5 00:19:51.797 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:51.797 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:51.797 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:19:52.056 null6 00:19:52.056 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:52.056 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:52.056 11:28:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:19:52.316 null7 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
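The second phase then sets up one null bdev per worker thread: nthreads=8, and bdev_null_create is traced for null0 through null7 with arguments 100 and 4096 (size and block size). A sketch of that setup loop, assuming the same rpc placeholder as above and an illustrative loop variable:

    nthreads=8
    pids=()
    # sh@59-@60: create null0..null7 (size 100, 4096-byte blocks, per the traced arguments)
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096
    done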
00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:52.316 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
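Each of the eight add_remove workers is launched in the background against its own bdev (add_remove 1 null0, add_remove 2 null1, and so on), its PID is appended to pids, and the script later blocks on all of them; the wait on the eight PIDs appears at sh@66 a little further down the trace. A sketch of the launch loop under the same assumptions:

    # sh@62-@64: one background hotplug worker per namespace/bdev pair
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"    # sh@66: block until every worker has finished its passes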
00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
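The interleaved @14-@18 trace lines are the body of each worker: add_remove takes a namespace ID and a bdev name and attaches and detaches that namespace ten times in a row. A sketch of the function as it can be read back from the trace (not the verbatim script source):

    add_remove() {
        local nsid=$1 bdev=$2
        # sh@16-@18: ten add/remove cycles against the same namespace ID
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }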
00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3859973 3859975 3859978 3859981 3859984 3859986 3859989 3859991 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.317 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:52.577 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:52.577 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:52.577 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:52.577 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:52.577 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:52.577 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:52.577 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:52.577 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.836 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:52.837 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:53.096 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:53.096 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:53.096 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:53.096 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:53.096 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:53.096 11:28:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.096 11:28:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.096 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:53.355 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:53.615 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:53.875 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:53.875 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:53.875 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:53.875 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:53.875 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:53.875 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:53.875 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:53.875 11:28:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.134 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:54.135 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.135 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.135 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:54.394 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:54.394 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.394 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:54.394 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:54.394 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:54.394 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:54.394 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:54.394 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:54.654 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:54.914 
11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.914 11:28:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:54.914 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:54.914 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:54.914 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:55.174 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.433 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.434 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:55.692 
11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:55.692 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.951 11:28:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:55.951 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.211 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:56.471 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:56.731 11:28:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.731 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.990 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.990 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.990 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.990 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.990 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.990 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.991 11:28:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.991 rmmod nvme_tcp 00:19:56.991 rmmod nvme_fabrics 00:19:56.991 rmmod nvme_keyring 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3853636 ']' 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3853636 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 
3853636 ']' 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 3853636 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3853636 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3853636' 00:19:56.991 killing process with pid 3853636 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 3853636 00:19:56.991 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 3853636 00:19:57.250 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.250 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.250 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.250 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.250 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.250 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.250 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.250 11:28:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.788 11:28:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:59.788 00:19:59.788 real 0m53.340s 00:19:59.788 user 3m24.862s 00:19:59.788 sys 0m23.873s 00:19:59.788 11:28:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:59.788 11:28:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:59.788 ************************************ 00:19:59.788 END TEST nvmf_ns_hotplug_stress 00:19:59.788 ************************************ 00:19:59.788 11:28:24 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:59.788 11:28:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:59.788 11:28:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:59.788 11:28:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:59.788 ************************************ 00:19:59.788 START TEST nvmf_connect_stress 00:19:59.788 ************************************ 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:59.788 * Looking for test storage... 
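The trace above is the tail of the namespace hotplug stress run: it repeatedly attaches namespaces 1-8 (backed by null bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 over rpc.py and detaches them again while host I/O continues, then tears the target down via nvmftestfini (module unload, killprocess of the nvmf_tgt pid, namespace flush). Below is a minimal standalone sketch of that add/remove loop, not the literal target/ns_hotplug_stress.sh: the NQN, bdev names and RPC verbs are taken from the trace, but the iteration/concurrency structure is simplified and the rpc.py path and shuffling are illustrative.

#!/usr/bin/env bash
# Sketch of the namespace add/remove hotplug loop exercised in the trace above.
# Assumes an SPDK nvmf target with subsystem nqn.2016-06.io.spdk:cnode1 and
# null bdevs null0..null7 already created; rpc.py path is illustrative.
rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
        # Attach namespaces 1..8 in random order; namespace N is backed by null$((N-1)).
        for n in $(shuf -i 1-8); do
                "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # Detach them again, also in random order, while host I/O keeps running.
        for n in $(shuf -i 1-8); do
                "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
        (( ++i ))
done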
00:19:59.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.788 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.789 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.789 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.789 11:28:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:19:59.789 11:28:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.789 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.789 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.789 11:28:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.789 11:28:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.912 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:08.172 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:08.172 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:08.172 Found net devices under 0000:af:00.0: cvl_0_0 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.172 11:28:33 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:08.172 Found net devices under 0000:af:00.1: cvl_0_1 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.172 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.173 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.173 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:08.173 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:08.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:20:08.432 00:20:08.432 --- 10.0.0.2 ping statistics --- 00:20:08.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.432 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:20:08.432 00:20:08.432 --- 10.0.0.1 ping statistics --- 00:20:08.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.432 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3865522 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3865522 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 3865522 ']' 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:08.432 11:28:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:08.432 [2024-06-10 11:28:33.440903] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
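The nvmftestinit portion traced above detects the two E810 (ice) ports as cvl_0_0 and cvl_0_1, moves the target-side port into its own network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420, and verifies reachability with ping in both directions before the target is launched inside the namespace. A condensed sketch of that bring-up, assuming the steps and names shown in the trace (nvmf/common.sh: nvmf_tcp_init); error handling is omitted.

#!/usr/bin/env bash
# Condensed sketch of the TCP test network bring-up traced above.
# Interface names and addresses are the ones reported in the log.
target_if=cvl_0_0            # target-side port (0000:af:00.0)
initiator_if=cvl_0_1         # initiator-side port (0000:af:00.1)
target_ns=cvl_0_0_ns_spdk    # network namespace the nvmf target runs in

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

# Isolate the target port in its own namespace and address both ends.
ip netns add "$target_ns"
ip link set "$target_if" netns "$target_ns"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$target_ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$target_ns" ip link set "$target_if" up
ip netns exec "$target_ns" ip link set lo up

# Accept NVMe/TCP traffic (port 4420) on the initiator side, then verify reachability.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec "$target_ns" ping -c 1 10.0.0.1        # target -> initiator

# The target is then started inside the namespace, as in the trace, e.g.:
# ip netns exec "$target_ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE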
00:20:08.432 [2024-06-10 11:28:33.440962] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.432 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.692 [2024-06-10 11:28:33.557584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:08.692 [2024-06-10 11:28:33.641739] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.692 [2024-06-10 11:28:33.641787] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.692 [2024-06-10 11:28:33.641801] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.692 [2024-06-10 11:28:33.641813] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.692 [2024-06-10 11:28:33.641822] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.692 [2024-06-10 11:28:33.641949] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.692 [2024-06-10 11:28:33.642058] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.692 [2024-06-10 11:28:33.642058] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.260 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:09.260 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:20:09.260 11:28:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.260 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:09.260 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:09.519 11:28:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.519 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.519 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.519 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:09.519 [2024-06-10 11:28:34.409869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.519 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.519 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:09.520 [2024-06-10 11:28:34.439735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:09.520 NULL1 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3865661 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.520 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:10.131 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.131 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:10.131 11:28:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:10.131 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.131 11:28:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:10.131 11:28:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.131 11:28:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:10.131 11:28:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:10.131 11:28:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.131 11:28:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:10.699 11:28:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.699 11:28:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:10.699 11:28:35 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:10.699 11:28:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.699 11:28:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:10.957 11:28:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.957 11:28:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:10.957 11:28:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:10.957 11:28:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.957 11:28:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:11.216 11:28:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.216 11:28:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:11.216 11:28:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:11.216 11:28:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.216 11:28:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:11.475 11:28:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.475 11:28:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:11.475 11:28:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:11.475 11:28:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.475 11:28:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:11.734 11:28:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.734 11:28:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:11.734 11:28:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:11.734 11:28:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.734 11:28:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:12.381 11:28:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.381 11:28:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:12.381 11:28:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:12.381 11:28:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.381 11:28:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:12.641 11:28:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.641 11:28:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:12.641 11:28:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:12.641 11:28:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.641 11:28:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:12.900 11:28:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.900 11:28:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:12.900 11:28:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:20:12.900 11:28:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.900 11:28:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:13.159 11:28:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.159 11:28:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:13.159 11:28:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:13.159 11:28:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.159 11:28:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:13.418 11:28:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.418 11:28:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:13.418 11:28:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:13.418 11:28:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.418 11:28:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:13.676 11:28:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.676 11:28:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:13.676 11:28:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:13.676 11:28:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.676 11:28:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.243 11:28:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.243 11:28:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:14.243 11:28:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:14.243 11:28:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.243 11:28:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.501 11:28:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.501 11:28:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:14.501 11:28:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:14.501 11:28:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.501 11:28:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:14.760 11:28:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.760 11:28:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:14.760 11:28:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:14.760 11:28:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.760 11:28:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.019 11:28:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.019 11:28:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:15.019 11:28:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:15.019 11:28:40 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.019 11:28:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.277 11:28:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.277 11:28:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:15.277 11:28:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:15.277 11:28:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.277 11:28:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:15.845 11:28:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.845 11:28:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:15.845 11:28:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:15.845 11:28:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.845 11:28:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.103 11:28:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.103 11:28:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:16.103 11:28:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.103 11:28:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.103 11:28:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.361 11:28:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.361 11:28:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:16.361 11:28:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.361 11:28:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.361 11:28:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:16.619 11:28:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.619 11:28:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:16.619 11:28:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:16.619 11:28:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.619 11:28:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.186 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.186 11:28:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:17.186 11:28:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.186 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.186 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.444 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.444 11:28:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:17.444 11:28:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.444 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 
-- # xtrace_disable 00:20:17.444 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.702 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.702 11:28:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:17.702 11:28:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.703 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.703 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:17.961 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.961 11:28:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:17.961 11:28:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:17.961 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.961 11:28:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:18.220 11:28:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.220 11:28:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:18.220 11:28:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:18.220 11:28:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.220 11:28:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:18.788 11:28:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.788 11:28:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:18.788 11:28:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:18.788 11:28:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.788 11:28:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.047 11:28:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.047 11:28:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:19.047 11:28:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.047 11:28:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.047 11:28:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.306 11:28:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.306 11:28:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:19.306 11:28:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.306 11:28:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.306 11:28:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.564 11:28:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.564 11:28:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:19.564 11:28:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:20:19.564 11:28:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.564 11:28:44 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:19.564 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3865661 00:20:20.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3865661) - No such process 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3865661 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.133 11:28:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.133 rmmod nvme_tcp 00:20:20.133 rmmod nvme_fabrics 00:20:20.133 rmmod nvme_keyring 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3865522 ']' 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3865522 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 3865522 ']' 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 3865522 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3865522 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3865522' 00:20:20.133 killing process with pid 3865522 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 3865522 00:20:20.133 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 3865522 00:20:20.392 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.392 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.392 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
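The trace above is the wait loop from test/nvmf/target/connect_stress.sh: while the background stress tool (PID 3865661 in this run) is alive, the script keeps issuing RPCs against the target, and once kill -0 reports "No such process" it reaps the PID and removes its rpc.txt scratch file. A minimal sketch of that pattern follows; the rpc_cmd stand-in, the placeholder background job, and the specific RPC are illustrative, not copied from the script.

    rpc_cmd() { ./scripts/rpc.py "$@"; }   # stand-in for the autotest_common.sh helper

    # In the real test PERF_PID is the connect_stress tool launched earlier with '&'
    # (3865661 in this log); a placeholder background job keeps the sketch self-contained.
    sleep 30 &
    PERF_PID=$!
    RPC_OUT=rpc.txt

    # Keep RPC traffic flowing against the target for as long as the stress tool is alive.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd nvmf_get_subsystems > "$RPC_OUT"   # the specific RPC here is illustrative
    done

    # Once kill -0 reports "No such process", reap the tool and remove the scratch file.
    wait "$PERF_PID" || true
    rm -f "$RPC_OUT"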
00:20:20.392 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.392 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.392 11:28:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.392 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.392 11:28:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.297 11:28:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.297 00:20:22.297 real 0m22.901s 00:20:22.297 user 0m42.664s 00:20:22.297 sys 0m11.494s 00:20:22.297 11:28:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:22.297 11:28:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:20:22.297 ************************************ 00:20:22.297 END TEST nvmf_connect_stress 00:20:22.297 ************************************ 00:20:22.556 11:28:47 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:22.556 11:28:47 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:22.556 11:28:47 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:22.556 11:28:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:22.556 ************************************ 00:20:22.556 START TEST nvmf_fused_ordering 00:20:22.556 ************************************ 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:20:22.556 * Looking for test storage... 
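nvmf_connect_stress finishes here and the harness dispatches the next target test through run_test. A hedged sketch of that wrapper as it appears in this trace (argument-count guard, START/END banners, the timing that produces the real/user/sys summary); the body is reconstructed from the traced commands, not copied from autotest_common.sh.

    run_test() {
        # Guard seen in the trace as '[' 3 -le 1 ']': need a test name plus a command.
        if (( $# <= 1 )); then
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi

        local test_name=$1
        shift

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"

        time "$@"    # likely source of the real/user/sys summary printed before END TEST

        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # Invocation as it appears in this log:
    # run_test nvmf_fused_ordering "$rootdir/test/nvmf/target/fused_ordering.sh" --transport=tcp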
00:20:22.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.556 11:28:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:20:22.557 11:28:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:32.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:32.540 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:32.541 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:32.541 Found net devices under 0000:af:00.0: cvl_0_0 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.541 11:28:55 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:32.541 Found net devices under 0000:af:00.1: cvl_0_1 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.541 11:28:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:32.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:32.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:20:32.541 00:20:32.541 --- 10.0.0.2 ping statistics --- 00:20:32.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.541 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:20:32.541 00:20:32.541 --- 10.0.0.1 ping statistics --- 00:20:32.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.541 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3871886 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3871886 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 3871886 ']' 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:32.541 11:28:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.541 [2024-06-10 11:28:56.283610] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
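The nvmf_tcp_init steps traced above split the two e810 ports between a fresh network namespace (target side) and the host (initiator side) before nvmf_tgt is started inside that namespace (the "Starting SPDK" banner that follows). A condensed replay of those steps, with interface names and addresses taken from this log; it assumes root and the same cvl_0_0/cvl_0_1 device names.

    TARGET_IF=cvl_0_0            # moves into the namespace, gets 10.0.0.2
    INITIATOR_IF=cvl_0_1         # stays on the host, gets 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic to the default port on the initiator-facing interface.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity checks mirrored from the log: each side must reach the other.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # The target then runs inside the namespace, as seen right after the pings:
    # ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2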
00:20:32.541 [2024-06-10 11:28:56.283670] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.541 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.541 [2024-06-10 11:28:56.403381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.541 [2024-06-10 11:28:56.484588] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.541 [2024-06-10 11:28:56.484630] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.541 [2024-06-10 11:28:56.484643] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.541 [2024-06-10 11:28:56.484655] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.541 [2024-06-10 11:28:56.484665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.541 [2024-06-10 11:28:56.484693] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.541 [2024-06-10 11:28:57.227917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.541 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.542 [2024-06-10 11:28:57.248127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.542 NULL1 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.542 11:28:57 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:32.542 [2024-06-10 11:28:57.304920] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:20:32.542 [2024-06-10 11:28:57.304958] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872145 ] 00:20:32.542 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.801 Attached to nqn.2016-06.io.spdk:cnode1 00:20:32.801 Namespace ID: 1 size: 1GB 00:20:32.801 fused_ordering(0) 00:20:32.801 fused_ordering(1) 00:20:32.801 fused_ordering(2) 00:20:32.801 fused_ordering(3) 00:20:32.801 fused_ordering(4) 00:20:32.801 fused_ordering(5) 00:20:32.801 fused_ordering(6) 00:20:32.801 fused_ordering(7) 00:20:32.801 fused_ordering(8) 00:20:32.801 fused_ordering(9) 00:20:32.801 fused_ordering(10) 00:20:32.801 fused_ordering(11) 00:20:32.801 fused_ordering(12) 00:20:32.801 fused_ordering(13) 00:20:32.801 fused_ordering(14) 00:20:32.801 fused_ordering(15) 00:20:32.801 fused_ordering(16) 00:20:32.801 fused_ordering(17) 00:20:32.801 fused_ordering(18) 00:20:32.801 fused_ordering(19) 00:20:32.801 fused_ordering(20) 00:20:32.802 fused_ordering(21) 00:20:32.802 fused_ordering(22) 00:20:32.802 fused_ordering(23) 00:20:32.802 fused_ordering(24) 00:20:32.802 fused_ordering(25) 00:20:32.802 fused_ordering(26) 00:20:32.802 fused_ordering(27) 00:20:32.802 fused_ordering(28) 00:20:32.802 fused_ordering(29) 00:20:32.802 fused_ordering(30) 00:20:32.802 fused_ordering(31) 00:20:32.802 fused_ordering(32) 00:20:32.802 fused_ordering(33) 00:20:32.802 fused_ordering(34) 00:20:32.802 fused_ordering(35) 00:20:32.802 fused_ordering(36) 00:20:32.802 fused_ordering(37) 00:20:32.802 fused_ordering(38) 00:20:32.802 fused_ordering(39) 00:20:32.802 fused_ordering(40) 00:20:32.802 fused_ordering(41) 00:20:32.802 fused_ordering(42) 00:20:32.802 fused_ordering(43) 00:20:32.802 fused_ordering(44) 00:20:32.802 fused_ordering(45) 
00:20:32.802 fused_ordering(46) 00:20:32.802 fused_ordering(47) 00:20:32.802 fused_ordering(48) 00:20:32.802 fused_ordering(49) 00:20:32.802 fused_ordering(50) 00:20:32.802 fused_ordering(51) 00:20:32.802 fused_ordering(52) 00:20:32.802 fused_ordering(53) 00:20:32.802 fused_ordering(54) 00:20:32.802 fused_ordering(55) 00:20:32.802 fused_ordering(56) 00:20:32.802 fused_ordering(57) 00:20:32.802 fused_ordering(58) 00:20:32.802 fused_ordering(59) 00:20:32.802 fused_ordering(60) 00:20:32.802 fused_ordering(61) 00:20:32.802 fused_ordering(62) 00:20:32.802 fused_ordering(63) 00:20:32.802 fused_ordering(64) 00:20:32.802 fused_ordering(65) 00:20:32.802 fused_ordering(66) 00:20:32.802 fused_ordering(67) 00:20:32.802 fused_ordering(68) 00:20:32.802 fused_ordering(69) 00:20:32.802 fused_ordering(70) 00:20:32.802 fused_ordering(71) 00:20:32.802 fused_ordering(72) 00:20:32.802 fused_ordering(73) 00:20:32.802 fused_ordering(74) 00:20:32.802 fused_ordering(75) 00:20:32.802 fused_ordering(76) 00:20:32.802 fused_ordering(77) 00:20:32.802 fused_ordering(78) 00:20:32.802 fused_ordering(79) 00:20:32.802 fused_ordering(80) 00:20:32.802 fused_ordering(81) 00:20:32.802 fused_ordering(82) 00:20:32.802 fused_ordering(83) 00:20:32.802 fused_ordering(84) 00:20:32.802 fused_ordering(85) 00:20:32.802 fused_ordering(86) 00:20:32.802 fused_ordering(87) 00:20:32.802 fused_ordering(88) 00:20:32.802 fused_ordering(89) 00:20:32.802 fused_ordering(90) 00:20:32.802 fused_ordering(91) 00:20:32.802 fused_ordering(92) 00:20:32.802 fused_ordering(93) 00:20:32.802 fused_ordering(94) 00:20:32.802 fused_ordering(95) 00:20:32.802 fused_ordering(96) 00:20:32.802 fused_ordering(97) 00:20:32.802 fused_ordering(98) 00:20:32.802 fused_ordering(99) 00:20:32.802 fused_ordering(100) 00:20:32.802 fused_ordering(101) 00:20:32.802 fused_ordering(102) 00:20:32.802 fused_ordering(103) 00:20:32.802 fused_ordering(104) 00:20:32.802 fused_ordering(105) 00:20:32.802 fused_ordering(106) 00:20:32.802 fused_ordering(107) 00:20:32.802 fused_ordering(108) 00:20:32.802 fused_ordering(109) 00:20:32.802 fused_ordering(110) 00:20:32.802 fused_ordering(111) 00:20:32.802 fused_ordering(112) 00:20:32.802 fused_ordering(113) 00:20:32.802 fused_ordering(114) 00:20:32.802 fused_ordering(115) 00:20:32.802 fused_ordering(116) 00:20:32.802 fused_ordering(117) 00:20:32.802 fused_ordering(118) 00:20:32.802 fused_ordering(119) 00:20:32.802 fused_ordering(120) 00:20:32.802 fused_ordering(121) 00:20:32.802 fused_ordering(122) 00:20:32.802 fused_ordering(123) 00:20:32.802 fused_ordering(124) 00:20:32.802 fused_ordering(125) 00:20:32.802 fused_ordering(126) 00:20:32.802 fused_ordering(127) 00:20:32.802 fused_ordering(128) 00:20:32.802 fused_ordering(129) 00:20:32.802 fused_ordering(130) 00:20:32.802 fused_ordering(131) 00:20:32.802 fused_ordering(132) 00:20:32.802 fused_ordering(133) 00:20:32.802 fused_ordering(134) 00:20:32.802 fused_ordering(135) 00:20:32.802 fused_ordering(136) 00:20:32.802 fused_ordering(137) 00:20:32.802 fused_ordering(138) 00:20:32.802 fused_ordering(139) 00:20:32.802 fused_ordering(140) 00:20:32.802 fused_ordering(141) 00:20:32.802 fused_ordering(142) 00:20:32.802 fused_ordering(143) 00:20:32.802 fused_ordering(144) 00:20:32.802 fused_ordering(145) 00:20:32.802 fused_ordering(146) 00:20:32.802 fused_ordering(147) 00:20:32.802 fused_ordering(148) 00:20:32.802 fused_ordering(149) 00:20:32.802 fused_ordering(150) 00:20:32.802 fused_ordering(151) 00:20:32.802 fused_ordering(152) 00:20:32.802 fused_ordering(153) 00:20:32.802 fused_ordering(154) 
00:20:32.802 fused_ordering(155) 00:20:32.802 fused_ordering(156) 00:20:32.802 fused_ordering(157) 00:20:32.802 fused_ordering(158) 00:20:32.802 fused_ordering(159) 00:20:32.802 fused_ordering(160) 00:20:32.802 fused_ordering(161) 00:20:32.802 fused_ordering(162) 00:20:32.802 fused_ordering(163) 00:20:32.802 fused_ordering(164) 00:20:32.802 fused_ordering(165) 00:20:32.802 fused_ordering(166) 00:20:32.802 fused_ordering(167) 00:20:32.802 fused_ordering(168) 00:20:32.802 fused_ordering(169) 00:20:32.802 fused_ordering(170) 00:20:32.802 fused_ordering(171) 00:20:32.802 fused_ordering(172) 00:20:32.802 fused_ordering(173) 00:20:32.802 fused_ordering(174) 00:20:32.802 fused_ordering(175) 00:20:32.802 fused_ordering(176) 00:20:32.802 fused_ordering(177) 00:20:32.802 fused_ordering(178) 00:20:32.802 fused_ordering(179) 00:20:32.802 fused_ordering(180) 00:20:32.802 fused_ordering(181) 00:20:32.802 fused_ordering(182) 00:20:32.802 fused_ordering(183) 00:20:32.802 fused_ordering(184) 00:20:32.802 fused_ordering(185) 00:20:32.802 fused_ordering(186) 00:20:32.802 fused_ordering(187) 00:20:32.802 fused_ordering(188) 00:20:32.802 fused_ordering(189) 00:20:32.802 fused_ordering(190) 00:20:32.802 fused_ordering(191) 00:20:32.802 fused_ordering(192) 00:20:32.802 fused_ordering(193) 00:20:32.802 fused_ordering(194) 00:20:32.802 fused_ordering(195) 00:20:32.802 fused_ordering(196) 00:20:32.802 fused_ordering(197) 00:20:32.802 fused_ordering(198) 00:20:32.802 fused_ordering(199) 00:20:32.802 fused_ordering(200) 00:20:32.802 fused_ordering(201) 00:20:32.802 fused_ordering(202) 00:20:32.802 fused_ordering(203) 00:20:32.802 fused_ordering(204) 00:20:32.802 fused_ordering(205) 00:20:33.371 fused_ordering(206) 00:20:33.371 fused_ordering(207) 00:20:33.371 fused_ordering(208) 00:20:33.371 fused_ordering(209) 00:20:33.371 fused_ordering(210) 00:20:33.371 fused_ordering(211) 00:20:33.371 fused_ordering(212) 00:20:33.371 fused_ordering(213) 00:20:33.371 fused_ordering(214) 00:20:33.371 fused_ordering(215) 00:20:33.371 fused_ordering(216) 00:20:33.371 fused_ordering(217) 00:20:33.371 fused_ordering(218) 00:20:33.371 fused_ordering(219) 00:20:33.371 fused_ordering(220) 00:20:33.371 fused_ordering(221) 00:20:33.371 fused_ordering(222) 00:20:33.371 fused_ordering(223) 00:20:33.371 fused_ordering(224) 00:20:33.371 fused_ordering(225) 00:20:33.371 fused_ordering(226) 00:20:33.371 fused_ordering(227) 00:20:33.371 fused_ordering(228) 00:20:33.371 fused_ordering(229) 00:20:33.371 fused_ordering(230) 00:20:33.371 fused_ordering(231) 00:20:33.371 fused_ordering(232) 00:20:33.371 fused_ordering(233) 00:20:33.371 fused_ordering(234) 00:20:33.371 fused_ordering(235) 00:20:33.371 fused_ordering(236) 00:20:33.371 fused_ordering(237) 00:20:33.371 fused_ordering(238) 00:20:33.371 fused_ordering(239) 00:20:33.371 fused_ordering(240) 00:20:33.371 fused_ordering(241) 00:20:33.371 fused_ordering(242) 00:20:33.371 fused_ordering(243) 00:20:33.371 fused_ordering(244) 00:20:33.371 fused_ordering(245) 00:20:33.371 fused_ordering(246) 00:20:33.371 fused_ordering(247) 00:20:33.371 fused_ordering(248) 00:20:33.371 fused_ordering(249) 00:20:33.371 fused_ordering(250) 00:20:33.371 fused_ordering(251) 00:20:33.371 fused_ordering(252) 00:20:33.371 fused_ordering(253) 00:20:33.371 fused_ordering(254) 00:20:33.371 fused_ordering(255) 00:20:33.371 fused_ordering(256) 00:20:33.371 fused_ordering(257) 00:20:33.371 fused_ordering(258) 00:20:33.371 fused_ordering(259) 00:20:33.371 fused_ordering(260) 00:20:33.371 fused_ordering(261) 00:20:33.371 
fused_ordering(262) 00:20:33.371 fused_ordering(263) 00:20:33.371 fused_ordering(264) 00:20:33.371 fused_ordering(265) 00:20:33.371 fused_ordering(266) 00:20:33.371 fused_ordering(267) 00:20:33.371 fused_ordering(268) 00:20:33.371 fused_ordering(269) 00:20:33.371 fused_ordering(270) 00:20:33.371 fused_ordering(271) 00:20:33.371 fused_ordering(272) 00:20:33.371 fused_ordering(273) 00:20:33.371 fused_ordering(274) 00:20:33.371 fused_ordering(275) 00:20:33.371 fused_ordering(276) 00:20:33.371 fused_ordering(277) 00:20:33.371 fused_ordering(278) 00:20:33.371 fused_ordering(279) 00:20:33.371 fused_ordering(280) 00:20:33.371 fused_ordering(281) 00:20:33.371 fused_ordering(282) 00:20:33.371 fused_ordering(283) 00:20:33.371 fused_ordering(284) 00:20:33.371 fused_ordering(285) 00:20:33.371 fused_ordering(286) 00:20:33.371 fused_ordering(287) 00:20:33.371 fused_ordering(288) 00:20:33.371 fused_ordering(289) 00:20:33.371 fused_ordering(290) 00:20:33.371 fused_ordering(291) 00:20:33.371 fused_ordering(292) 00:20:33.371 fused_ordering(293) 00:20:33.371 fused_ordering(294) 00:20:33.371 fused_ordering(295) 00:20:33.371 fused_ordering(296) 00:20:33.371 fused_ordering(297) 00:20:33.371 fused_ordering(298) 00:20:33.371 fused_ordering(299) 00:20:33.371 fused_ordering(300) 00:20:33.371 fused_ordering(301) 00:20:33.371 fused_ordering(302) 00:20:33.371 fused_ordering(303) 00:20:33.371 fused_ordering(304) 00:20:33.372 fused_ordering(305) 00:20:33.372 fused_ordering(306) 00:20:33.372 fused_ordering(307) 00:20:33.372 fused_ordering(308) 00:20:33.372 fused_ordering(309) 00:20:33.372 fused_ordering(310) 00:20:33.372 fused_ordering(311) 00:20:33.372 fused_ordering(312) 00:20:33.372 fused_ordering(313) 00:20:33.372 fused_ordering(314) 00:20:33.372 fused_ordering(315) 00:20:33.372 fused_ordering(316) 00:20:33.372 fused_ordering(317) 00:20:33.372 fused_ordering(318) 00:20:33.372 fused_ordering(319) 00:20:33.372 fused_ordering(320) 00:20:33.372 fused_ordering(321) 00:20:33.372 fused_ordering(322) 00:20:33.372 fused_ordering(323) 00:20:33.372 fused_ordering(324) 00:20:33.372 fused_ordering(325) 00:20:33.372 fused_ordering(326) 00:20:33.372 fused_ordering(327) 00:20:33.372 fused_ordering(328) 00:20:33.372 fused_ordering(329) 00:20:33.372 fused_ordering(330) 00:20:33.372 fused_ordering(331) 00:20:33.372 fused_ordering(332) 00:20:33.372 fused_ordering(333) 00:20:33.372 fused_ordering(334) 00:20:33.372 fused_ordering(335) 00:20:33.372 fused_ordering(336) 00:20:33.372 fused_ordering(337) 00:20:33.372 fused_ordering(338) 00:20:33.372 fused_ordering(339) 00:20:33.372 fused_ordering(340) 00:20:33.372 fused_ordering(341) 00:20:33.372 fused_ordering(342) 00:20:33.372 fused_ordering(343) 00:20:33.372 fused_ordering(344) 00:20:33.372 fused_ordering(345) 00:20:33.372 fused_ordering(346) 00:20:33.372 fused_ordering(347) 00:20:33.372 fused_ordering(348) 00:20:33.372 fused_ordering(349) 00:20:33.372 fused_ordering(350) 00:20:33.372 fused_ordering(351) 00:20:33.372 fused_ordering(352) 00:20:33.372 fused_ordering(353) 00:20:33.372 fused_ordering(354) 00:20:33.372 fused_ordering(355) 00:20:33.372 fused_ordering(356) 00:20:33.372 fused_ordering(357) 00:20:33.372 fused_ordering(358) 00:20:33.372 fused_ordering(359) 00:20:33.372 fused_ordering(360) 00:20:33.372 fused_ordering(361) 00:20:33.372 fused_ordering(362) 00:20:33.372 fused_ordering(363) 00:20:33.372 fused_ordering(364) 00:20:33.372 fused_ordering(365) 00:20:33.372 fused_ordering(366) 00:20:33.372 fused_ordering(367) 00:20:33.372 fused_ordering(368) 00:20:33.372 fused_ordering(369) 
00:20:33.372 fused_ordering(370) [identical fused_ordering counter lines for entries 371 through 1013 condensed; every counter in that range was reported, elapsed 00:20:33.372 - 00:20:35.447] 00:20:35.447
fused_ordering(1014) 00:20:35.447 fused_ordering(1015) 00:20:35.447 fused_ordering(1016) 00:20:35.447 fused_ordering(1017) 00:20:35.447 fused_ordering(1018) 00:20:35.447 fused_ordering(1019) 00:20:35.447 fused_ordering(1020) 00:20:35.447 fused_ordering(1021) 00:20:35.447 fused_ordering(1022) 00:20:35.447 fused_ordering(1023) 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.448 rmmod nvme_tcp 00:20:35.448 rmmod nvme_fabrics 00:20:35.448 rmmod nvme_keyring 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3871886 ']' 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3871886 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 3871886 ']' 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 3871886 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3871886 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3871886' 00:20:35.448 killing process with pid 3871886 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 3871886 00:20:35.448 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 3871886 00:20:35.708 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.708 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:35.708 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.708 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.708 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.708 11:29:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.708 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:20:35.708 11:29:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.614 11:29:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.614 00:20:37.614 real 0m15.229s 00:20:37.614 user 0m7.864s 00:20:37.614 sys 0m9.000s 00:20:37.614 11:29:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:37.614 11:29:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:20:37.614 ************************************ 00:20:37.614 END TEST nvmf_fused_ordering 00:20:37.614 ************************************ 00:20:37.873 11:29:02 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:37.873 11:29:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:37.873 11:29:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:37.873 11:29:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.873 ************************************ 00:20:37.873 START TEST nvmf_delete_subsystem 00:20:37.873 ************************************ 00:20:37.873 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:20:37.873 * Looking for test storage... 00:20:37.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:37.873 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.873 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.874 11:29:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.860 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.860 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.860 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:47.861 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:47.861 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:47.861 11:29:11 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:47.861 Found net devices under 0000:af:00.0: cvl_0_0 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:47.861 Found net devices under 0000:af:00.1: cvl_0_1 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:47.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:20:47.861 00:20:47.861 --- 10.0.0.2 ping statistics --- 00:20:47.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.861 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:20:47.861 00:20:47.861 --- 10.0.0.1 ping statistics --- 00:20:47.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.861 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3877124 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3877124 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 3877124 ']' 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:47.861 11:29:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.861 [2024-06-10 11:29:11.574107] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:20:47.861 [2024-06-10 11:29:11.574168] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.861 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.861 [2024-06-10 11:29:11.700996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:47.861 [2024-06-10 11:29:11.785772] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:47.861 [2024-06-10 11:29:11.785820] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.861 [2024-06-10 11:29:11.785840] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.861 [2024-06-10 11:29:11.785855] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.861 [2024-06-10 11:29:11.785869] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.861 [2024-06-10 11:29:11.785980] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.861 [2024-06-10 11:29:11.785986] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.861 [2024-06-10 11:29:12.519115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.861 [2024-06-10 11:29:12.535329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:20:47.861 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.862 NULL1 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.862 Delay0 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3877404 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:47.862 11:29:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:20:47.862 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.862 [2024-06-10 11:29:12.619947] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
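Condensed into a sketch, the sequence that the rpc_cmd trace above drives looks roughly like the following. This is not captured output: it assumes the rpc_cmd helper in the trace resolves to scripts/rpc.py on the default RPC socket, and it reuses only the arguments visible in this run (nqn.2016-06.io.spdk:cnode1, 10.0.0.2:4420, NULL1/Delay0, the spdk_nvme_perf command line); paths are relative to the SPDK checkout.

# Minimal sketch (assumptions noted above): delete an NVMe-oF TCP subsystem while I/O is still queued.
rpc() { scripts/rpc.py "$@"; }   # assumption: rpc_cmd in the trace wraps scripts/rpc.py
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512   # 1000 MB null bdev with 512-byte blocks
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large artificial latencies (microseconds) keep requests queued
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # removed while the perf job still has I/O outstanding

The aborted completions and "starting I/O failed: -6" lines that follow appear to be the direct consequence of deleting the subsystem while the delayed namespace still holds queued requests, which is the condition this test exercises.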
00:20:49.849 11:29:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.849 11:29:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.849 11:29:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 starting I/O failed: -6 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Write completed with error (sct=0, sc=8) 00:20:49.849 [2024-06-10 11:29:14.750256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154e250 is same with the state(5) to be set 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 Read completed with error (sct=0, sc=8) 00:20:49.849 
[long run of further "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completion lines with interspersed "starting I/O failed: -6" markers condensed, elapsed 00:20:49.849 - 00:20:50.787; the qpair state messages reported during this teardown were:]
[2024-06-10 11:29:14.751300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3a98000c00 is same with the state(5) to be set
[2024-06-10 11:29:15.717956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156f1a0 is same with the state(5) to be set
[2024-06-10 11:29:15.750632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154e070 is same with the state(5) to be set
[2024-06-10 11:29:15.754086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3a9800c780 is same with the state(5) to be set
[2024-06-10 11:29:15.754252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3a9800bfe0 is same with the state(5) to be set
00:20:50.787 Read completed
with error (sct=0, sc=8) 00:20:50.787 Read completed with error (sct=0, sc=8) 00:20:50.787 Read completed with error (sct=0, sc=8) 00:20:50.787 Read completed with error (sct=0, sc=8) 00:20:50.787 Write completed with error (sct=0, sc=8) 00:20:50.787 Read completed with error (sct=0, sc=8) 00:20:50.787 Write completed with error (sct=0, sc=8) 00:20:50.787 Write completed with error (sct=0, sc=8) 00:20:50.787 Write completed with error (sct=0, sc=8) 00:20:50.787 Write completed with error (sct=0, sc=8) 00:20:50.787 Write completed with error (sct=0, sc=8) 00:20:50.788 Write completed with error (sct=0, sc=8) 00:20:50.788 Read completed with error (sct=0, sc=8) 00:20:50.788 [2024-06-10 11:29:15.754777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156fc30 is same with the state(5) to be set 00:20:50.788 Initializing NVMe Controllers 00:20:50.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.788 Controller IO queue size 128, less than required. 00:20:50.788 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:50.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:50.788 Initialization complete. Launching workers. 00:20:50.788 ======================================================== 00:20:50.788 Latency(us) 00:20:50.788 Device Information : IOPS MiB/s Average min max 00:20:50.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.30 0.08 889806.33 319.36 1012785.93 00:20:50.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.32 0.08 896797.25 395.58 1013212.45 00:20:50.788 ======================================================== 00:20:50.788 Total : 341.62 0.17 893271.31 319.36 1013212.45 00:20:50.788 00:20:50.788 [2024-06-10 11:29:15.755108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156f1a0 (9): Bad file descriptor 00:20:50.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:50.788 11:29:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.788 11:29:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:20:50.788 11:29:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3877404 00:20:50.788 11:29:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:20:51.355 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:20:51.355 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3877404 00:20:51.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3877404) - No such process 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3877404 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 3877404 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t 
"$arg")" in 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 3877404 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:51.356 [2024-06-10 11:29:16.283691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3877951 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3877951 00:20:51.356 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:51.356 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.356 [2024-06-10 11:29:16.352258] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:51.923 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:51.923 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3877951 00:20:51.923 11:29:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:52.491 11:29:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:52.491 11:29:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3877951 00:20:52.491 11:29:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:52.749 11:29:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:52.749 11:29:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3877951 00:20:52.749 11:29:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:53.317 11:29:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:53.317 11:29:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3877951 00:20:53.317 11:29:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:53.885 11:29:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:53.885 11:29:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3877951 00:20:53.885 11:29:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:54.453 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:54.453 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3877951 00:20:54.453 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:54.453 Initializing NVMe Controllers 00:20:54.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:54.453 Controller IO queue size 128, less than required. 00:20:54.453 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:54.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:54.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:54.453 Initialization complete. Launching workers. 
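The sleep 0.5 iterations above are the second half of the pattern the delete_subsystem test uses: re-create the subsystem, point spdk_nvme_perf at it in the background, then poll with kill -0 until the perf process exits or a bounded number of iterations passes. A condensed sketch of that sequence, with the rpc.py path, listener address and perf flags copied from the trace; the PID handling is simplified:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # background load: 3 seconds of 70/30 random read/write, qd 128, 512-byte I/O on cores 2-3
    $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    delay=0
    # kill -0 only tests whether the PID still exists; give up after roughly 10 s
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf did not exit"; break; }
        sleep 0.5
    done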
00:20:54.453 ======================================================== 00:20:54.453 Latency(us) 00:20:54.453 Device Information : IOPS MiB/s Average min max 00:20:54.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002862.47 1000223.47 1011510.41 00:20:54.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004911.45 1000270.07 1041380.92 00:20:54.453 ======================================================== 00:20:54.453 Total : 256.00 0.12 1003886.96 1000223.47 1041380.92 00:20:54.453 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3877951 00:20:55.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3877951) - No such process 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3877951 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:55.021 rmmod nvme_tcp 00:20:55.021 rmmod nvme_fabrics 00:20:55.021 rmmod nvme_keyring 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3877124 ']' 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3877124 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 3877124 ']' 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 3877124 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3877124 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3877124' 00:20:55.021 killing process with pid 3877124 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 3877124 00:20:55.021 11:29:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 
3877124 00:20:55.279 11:29:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:55.279 11:29:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:55.279 11:29:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:55.279 11:29:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.279 11:29:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:55.279 11:29:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.279 11:29:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.279 11:29:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.185 11:29:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:57.185 00:20:57.185 real 0m19.468s 00:20:57.185 user 0m30.517s 00:20:57.185 sys 0m8.321s 00:20:57.186 11:29:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:57.186 11:29:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:20:57.186 ************************************ 00:20:57.186 END TEST nvmf_delete_subsystem 00:20:57.186 ************************************ 00:20:57.445 11:29:22 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:20:57.445 11:29:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:57.445 11:29:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:57.445 11:29:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:57.445 ************************************ 00:20:57.445 START TEST nvmf_ns_masking 00:20:57.445 ************************************ 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:20:57.445 * Looking for test storage... 
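Between the two tests the harness runs nvmftestfini, traced just above the nvmf_ns_masking banner: the kernel initiator modules are unloaded, the nvmf_tgt process is killed by PID, the SPDK network namespace is removed and the initiator-side address is flushed. A compressed sketch of that cleanup; the PID and interface name are specific to this run:

    sync
    modprobe -v -r nvme-tcp        # the trace shows this also rmmod'ing nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3877124 && wait 3877124   # nvmf_tgt started earlier; wait works because it is a child of the test shell
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address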
00:20:57.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.445 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=8f82c784-d997-4b8f-a724-f0e1a12e9b54 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:57.446 11:29:22 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:20:57.446 11:29:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.429 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:07.430 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:07.430 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:07.430 Found net devices under 0000:af:00.0: cvl_0_0 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
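The discovery loop above maps each supported PCI function to its kernel net device by globbing the device's sysfs node; 0000:af:00.0 resolves to cvl_0_0 here, and the next entries repeat the same lookup for 0000:af:00.1 (cvl_0_1). A standalone sketch of that lookup, using the PCI address from this host:

    pci=0000:af:00.0
    # the kernel links the bound netdev(s) under the PCI device's sysfs entry
    pci_net_devs=( /sys/bus/pci/devices/$pci/net/* )
    pci_net_devs=( "${pci_net_devs[@]##*/}" )       # strip the path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 on this machine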
00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:07.430 Found net devices under 0000:af:00.1: cvl_0_1 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.430 11:29:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:21:07.430 00:21:07.430 --- 10.0.0.2 ping statistics --- 00:21:07.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.430 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:21:07.430 00:21:07.430 --- 10.0.0.1 ping statistics --- 00:21:07.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.430 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3883135 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3883135 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 3883135 ']' 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:07.430 11:29:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:07.430 [2024-06-10 11:29:31.391027] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
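The two pings above validate the topology nvmf_tcp_init builds for a phy run: the first E810 port is moved into a private network namespace and acts as the target side (10.0.0.2), the second port stays in the root namespace as the initiator (10.0.0.1), and port 4420 is opened in the firewall, so NVMe/TCP traffic actually traverses the physical link between the two ports. A condensed sketch of that setup with the names from this run; nvmf_tgt is then launched inside the namespace with ip netns exec, which is why the app start that follows carries the namespace prefix:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator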
00:21:07.430 [2024-06-10 11:29:31.391090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.430 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.430 [2024-06-10 11:29:31.518762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.430 [2024-06-10 11:29:31.603407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.430 [2024-06-10 11:29:31.603453] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.430 [2024-06-10 11:29:31.603472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.431 [2024-06-10 11:29:31.603487] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.431 [2024-06-10 11:29:31.603501] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.431 [2024-06-10 11:29:31.603566] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.431 [2024-06-10 11:29:31.603665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.431 [2024-06-10 11:29:31.603715] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.431 [2024-06-10 11:29:31.603712] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:07.431 [2024-06-10 11:29:32.495366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:21:07.431 11:29:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:07.690 Malloc1 00:21:07.690 11:29:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:07.949 Malloc2 00:21:07.949 11:29:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:08.208 11:29:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:21:08.467 11:29:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.467 [2024-06-10 11:29:33.545007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.727 11:29:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:21:08.727 11:29:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8f82c784-d997-4b8f-a724-f0e1a12e9b54 -a 10.0.0.2 -s 4420 -i 4 00:21:08.727 11:29:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:21:08.727 11:29:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:21:08.727 11:29:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:21:08.727 11:29:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:21:08.727 11:29:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:21:11.262 11:29:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:21:11.262 11:29:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:11.262 11:29:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:21:11.262 11:29:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:21:11.263 [ 0]:0x1 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=68650e239df34f27aeb623e3e08d732d 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 68650e239df34f27aeb623e3e08d732d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:11.263 11:29:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:21:11.263 [ 0]:0x1 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=68650e239df34f27aeb623e3e08d732d 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 68650e239df34f27aeb623e3e08d732d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:21:11.263 [ 1]:0x2 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b4d75dda510146719c43546d3713cd35 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b4d75dda510146719c43546d3713cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:11.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:11.263 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:11.522 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:21:11.780 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:21:11.781 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8f82c784-d997-4b8f-a724-f0e1a12e9b54 -a 10.0.0.2 -s 4420 -i 4 00:21:12.040 11:29:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:21:12.040 11:29:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:21:12.040 11:29:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:21:12.040 11:29:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:21:12.040 11:29:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:21:12.040 11:29:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:21:13.946 11:29:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:21:13.946 11:29:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:13.946 11:29:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:21:13.946 11:29:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:21:13.946 11:29:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:21:13.946 11:29:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:21:13.946 11:29:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:21:13.946 11:29:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:13.946 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:21:14.205 [ 0]:0x2 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b4d75dda510146719c43546d3713cd35 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b4d75dda510146719c43546d3713cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:14.205 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:21:14.465 [ 0]:0x1 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=68650e239df34f27aeb623e3e08d732d 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 68650e239df34f27aeb623e3e08d732d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:21:14.465 [ 1]:0x2 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b4d75dda510146719c43546d3713cd35 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b4d75dda510146719c43546d3713cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:14.465 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:14.724 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:21:14.724 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:21:14.724 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:21:14.724 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:21:14.725 
11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:21:14.725 [ 0]:0x2 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b4d75dda510146719c43546d3713cd35 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b4d75dda510146719c43546d3713cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:21:14.725 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:14.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:14.983 11:29:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:14.983 11:29:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:21:14.983 11:29:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8f82c784-d997-4b8f-a724-f0e1a12e9b54 -a 10.0.0.2 -s 4420 -i 4 00:21:15.242 11:29:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:15.242 11:29:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:21:15.242 11:29:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:21:15.242 11:29:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:21:15.242 11:29:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:21:15.242 11:29:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:21:17.148 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:21:17.148 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:17.148 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:21:17.407 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:21:17.407 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:21:17.408 [ 0]:0x1 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=68650e239df34f27aeb623e3e08d732d 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 68650e239df34f27aeb623e3e08d732d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:21:17.408 [ 1]:0x2 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b4d75dda510146719c43546d3713cd35 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b4d75dda510146719c43546d3713cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.408 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:21:17.668 [ 0]:0x2 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b4d75dda510146719c43546d3713cd35 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b4d75dda510146719c43546d3713cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:17.668 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:17.928 [2024-06-10 11:29:42.861770] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:21:17.928 request: 00:21:17.928 { 00:21:17.928 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.928 "nsid": 2, 00:21:17.928 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.928 "method": 
"nvmf_ns_remove_host", 00:21:17.928 "req_id": 1 00:21:17.928 } 00:21:17.928 Got JSON-RPC error response 00:21:17.928 response: 00:21:17.928 { 00:21:17.928 "code": -32602, 00:21:17.928 "message": "Invalid parameters" 00:21:17.928 } 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:17.928 [ 0]:0x2 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:17.928 11:29:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:17.928 11:29:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b4d75dda510146719c43546d3713cd35 00:21:17.928 11:29:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b4d75dda510146719c43546d3713cd35 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:17.928 11:29:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:21:17.928 11:29:43 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:18.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:18.187 11:29:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.187 11:29:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:18.187 11:29:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:21:18.187 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.187 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:21:18.187 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:18.187 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:21:18.187 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.187 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:18.187 rmmod nvme_tcp 00:21:18.187 rmmod nvme_fabrics 00:21:18.187 rmmod nvme_keyring 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3883135 ']' 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3883135 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 3883135 ']' 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 3883135 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3883135 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3883135' 00:21:18.447 killing process with pid 3883135 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 3883135 00:21:18.447 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 3883135 00:21:18.707 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.707 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.707 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.707 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.707 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.707 11:29:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.707 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.707 11:29:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.614 
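The ns_masking checks that finish here reduce to a short nvme-cli / SPDK RPC sequence. The lines below are a condensed sketch of the same commands already shown in the trace, not additional recorded output; rpc.py is shorthand for the scripts/rpc.py path the test invokes, and the NQNs, NSID and device name are the ones this run uses.

  # Export namespace 1 without auto-visibility, then grant it to one host NQN
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

  # On the initiator, a namespace counts as visible when list-ns reports its NSID
  # and id-ns returns a non-zero NGUID
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

  # Revoking the mapping hides the namespace again; the NGUID reads back as all zeros
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1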
11:29:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.614 00:21:20.614 real 0m23.354s 00:21:20.614 user 0m52.267s 00:21:20.614 sys 0m9.509s 00:21:20.614 11:29:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:20.614 11:29:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:20.614 ************************************ 00:21:20.614 END TEST nvmf_ns_masking 00:21:20.614 ************************************ 00:21:20.874 11:29:45 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:21:20.874 11:29:45 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:21:20.874 11:29:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:20.874 11:29:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:20.874 11:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.874 ************************************ 00:21:20.874 START TEST nvmf_nvme_cli 00:21:20.874 ************************************ 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:21:20.874 * Looking for test storage... 00:21:20.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.874 11:29:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:30.930 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.930 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:30.931 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.931 11:29:54 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:30.931 Found net devices under 0000:af:00.0: cvl_0_0 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:30.931 Found net devices under 0000:af:00.1: cvl_0_1 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:30.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:21:30.931 00:21:30.931 --- 10.0.0.2 ping statistics --- 00:21:30.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.931 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:30.931 00:21:30.931 --- 10.0.0.1 ping statistics --- 00:21:30.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.931 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3889654 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3889654 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 3889654 ']' 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
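Condensed from the setup trace above, the nvme_cli run that starts here drives an SPDK target and a kernel initiator over a pair of e810 ports on the test host: cvl_0_0 is moved into a network namespace and serves 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. The sketch below restates those steps together with the target configuration that follows; it is illustrative rather than part of the recorded output, rpc.py again stands for scripts/rpc.py, and the nvmf_tgt path is abbreviated.

  # Target side: isolate one port in a namespace and address both ends of the link
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Start nvmf_tgt inside the namespace, then configure it through the RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: discover the subsystem and connect over TCP
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420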
00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.931 11:29:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:30.931 [2024-06-10 11:29:54.611256] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:21:30.931 [2024-06-10 11:29:54.611316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.931 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.931 [2024-06-10 11:29:54.740654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:30.931 [2024-06-10 11:29:54.831007] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.931 [2024-06-10 11:29:54.831055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.931 [2024-06-10 11:29:54.831075] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.931 [2024-06-10 11:29:54.831090] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.931 [2024-06-10 11:29:54.831103] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.931 [2024-06-10 11:29:54.831170] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.931 [2024-06-10 11:29:54.831191] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.931 [2024-06-10 11:29:54.831310] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.931 [2024-06-10 11:29:54.831312] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.931 [2024-06-10 11:29:55.566070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.931 Malloc0 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.931 Malloc1 00:21:30.931 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.932 [2024-06-10 11:29:55.652604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:21:30.932 00:21:30.932 Discovery Log Number of Records 2, Generation counter 2 00:21:30.932 =====Discovery Log Entry 0====== 00:21:30.932 trtype: tcp 00:21:30.932 adrfam: ipv4 00:21:30.932 subtype: current discovery subsystem 00:21:30.932 treq: not required 00:21:30.932 portid: 0 00:21:30.932 trsvcid: 4420 00:21:30.932 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:30.932 traddr: 10.0.0.2 00:21:30.932 eflags: explicit discovery connections, duplicate discovery information 00:21:30.932 sectype: none 
00:21:30.932 =====Discovery Log Entry 1====== 00:21:30.932 trtype: tcp 00:21:30.932 adrfam: ipv4 00:21:30.932 subtype: nvme subsystem 00:21:30.932 treq: not required 00:21:30.932 portid: 0 00:21:30.932 trsvcid: 4420 00:21:30.932 subnqn: nqn.2016-06.io.spdk:cnode1 00:21:30.932 traddr: 10.0.0.2 00:21:30.932 eflags: none 00:21:30.932 sectype: none 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:21:30.932 11:29:55 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:32.310 11:29:57 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:32.310 11:29:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:21:32.310 11:29:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:21:32.310 11:29:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:21:32.310 11:29:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:21:32.310 11:29:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:21:34.217 /dev/nvme0n1 ]] 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:21:34.217 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:34.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:34.476 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:34.476 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:21:34.476 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:21:34.476 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:34.476 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:21:34.476 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:34.476 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:21:34.477 11:29:59 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.477 rmmod nvme_tcp 00:21:34.477 rmmod nvme_fabrics 00:21:34.477 rmmod nvme_keyring 00:21:34.477 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3889654 ']' 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3889654 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 3889654 ']' 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 3889654 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3889654 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3889654' 00:21:34.737 killing process with pid 3889654 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 3889654 00:21:34.737 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 3889654 00:21:34.996 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:34.996 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:34.996 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:34.996 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:34.996 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:34.996 11:29:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.996 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.996 11:29:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.901 11:30:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:36.901 00:21:36.901 real 0m16.211s 00:21:36.901 user 0m22.344s 00:21:36.901 sys 0m7.544s 00:21:36.901 11:30:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:36.901 11:30:01 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@10 -- # set +x 00:21:36.901 ************************************ 00:21:36.901 END TEST nvmf_nvme_cli 00:21:36.901 ************************************ 00:21:37.161 11:30:02 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:21:37.161 11:30:02 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:21:37.161 11:30:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:37.161 11:30:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:37.161 11:30:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:37.161 ************************************ 00:21:37.161 START TEST nvmf_vfio_user 00:21:37.161 ************************************ 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:21:37.161 * Looking for test storage... 00:21:37.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.161 11:30:02 
nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.161 11:30:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3891246 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3891246' 00:21:37.162 Process pid: 3891246 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3891246 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 3891246 ']' 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:37.162 11:30:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:21:37.162 [2024-06-10 11:30:02.258502] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:21:37.162 [2024-06-10 11:30:02.258560] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.422 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.422 [2024-06-10 11:30:02.378789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:37.422 [2024-06-10 11:30:02.464493] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.422 [2024-06-10 11:30:02.464541] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.422 [2024-06-10 11:30:02.464561] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.422 [2024-06-10 11:30:02.464581] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.422 [2024-06-10 11:30:02.464597] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
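Condensed for reference, the vfio-user target bring-up that this and the following trace entries record reduces to the shell sequence below. This is an illustrative sketch only: the actual nvmf_vfio_user.sh runs with absolute workspace paths, uses waitforlisten on /var/tmp/spdk.sock rather than a fixed sleep, and takes NUM_DEVICES=2 from the settings shown above; every rpc.py call in the sketch does appear verbatim in the xtrace.

    # sketch of the recorded bring-up (paths shortened, waitforlisten replaced
    # by a plain sleep for brevity)
    rpc=./scripts/rpc.py
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
    sleep 2   # stand-in for waitforlisten /var/tmp/spdk.sock

    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"
        $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done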
00:21:37.422 [2024-06-10 11:30:02.464668] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.422 [2024-06-10 11:30:02.464763] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.422 [2024-06-10 11:30:02.464876] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.422 [2024-06-10 11:30:02.464878] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.359 11:30:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:38.359 11:30:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:21:38.359 11:30:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:21:39.297 11:30:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:21:39.556 11:30:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:21:39.556 11:30:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:21:39.556 11:30:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:39.556 11:30:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:21:39.556 11:30:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:39.556 Malloc1 00:21:39.815 11:30:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:21:39.815 11:30:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:21:40.074 11:30:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:21:40.333 11:30:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:40.333 11:30:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:21:40.333 11:30:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:40.592 Malloc2 00:21:40.592 11:30:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:21:40.851 11:30:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:21:41.111 11:30:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:21:41.371 11:30:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:21:41.371 11:30:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:21:41.371 11:30:06 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:21:41.371 11:30:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:21:41.371 11:30:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:21:41.371 11:30:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:21:41.371 [2024-06-10 11:30:06.370512] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:21:41.371 [2024-06-10 11:30:06.370557] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892487 ] 00:21:41.371 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.371 [2024-06-10 11:30:06.407066] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:21:41.371 [2024-06-10 11:30:06.412473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:41.371 [2024-06-10 11:30:06.412499] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efd78564000 00:21:41.371 [2024-06-10 11:30:06.413469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:41.371 [2024-06-10 11:30:06.414469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:41.371 [2024-06-10 11:30:06.415475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:41.371 [2024-06-10 11:30:06.416481] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:41.371 [2024-06-10 11:30:06.417490] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:41.371 [2024-06-10 11:30:06.418496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:41.371 [2024-06-10 11:30:06.419507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:21:41.371 [2024-06-10 11:30:06.420509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:21:41.371 [2024-06-10 11:30:06.421515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:21:41.371 [2024-06-10 11:30:06.421533] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efd78559000 00:21:41.371 [2024-06-10 11:30:06.422785] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:41.371 [2024-06-10 11:30:06.439189] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:21:41.371 [2024-06-10 11:30:06.439219] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:21:41.371 [2024-06-10 11:30:06.441668] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:21:41.371 [2024-06-10 11:30:06.441722] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:21:41.371 [2024-06-10 11:30:06.441815] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:21:41.371 [2024-06-10 11:30:06.441838] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:21:41.371 [2024-06-10 11:30:06.441848] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:21:41.371 [2024-06-10 11:30:06.446586] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:21:41.371 [2024-06-10 11:30:06.446602] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:21:41.372 [2024-06-10 11:30:06.446615] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:21:41.372 [2024-06-10 11:30:06.446680] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:21:41.372 [2024-06-10 11:30:06.446693] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:21:41.372 [2024-06-10 11:30:06.446705] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:21:41.372 [2024-06-10 11:30:06.447687] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:21:41.372 [2024-06-10 11:30:06.447700] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:41.372 [2024-06-10 11:30:06.448692] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:21:41.372 [2024-06-10 11:30:06.448705] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:21:41.372 [2024-06-10 11:30:06.448713] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:21:41.372 [2024-06-10 11:30:06.448725] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:41.372 [2024-06-10 11:30:06.448834] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:21:41.372 [2024-06-10 11:30:06.448843] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:41.372 [2024-06-10 11:30:06.448852] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:21:41.372 [2024-06-10 11:30:06.449699] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:21:41.372 [2024-06-10 11:30:06.450706] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:21:41.372 [2024-06-10 11:30:06.451714] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:21:41.372 [2024-06-10 11:30:06.452710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:41.372 [2024-06-10 11:30:06.452815] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:41.372 [2024-06-10 11:30:06.453727] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:21:41.372 [2024-06-10 11:30:06.453750] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:41.372 [2024-06-10 11:30:06.453760] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.453787] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:21:41.372 [2024-06-10 11:30:06.453801] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.453824] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:41.372 [2024-06-10 11:30:06.453833] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:41.372 [2024-06-10 11:30:06.453851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:41.372 [2024-06-10 11:30:06.453911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:21:41.372 [2024-06-10 11:30:06.453926] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:21:41.372 [2024-06-10 11:30:06.453934] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:21:41.372 [2024-06-10 11:30:06.453942] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:21:41.372 [2024-06-10 11:30:06.453951] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:21:41.372 [2024-06-10 11:30:06.453960] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:21:41.372 [2024-06-10 11:30:06.453968] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:21:41.372 [2024-06-10 11:30:06.453976] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454082] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:21:41.372 [2024-06-10 11:30:06.454123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:21:41.372 [2024-06-10 11:30:06.454137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.372 [2024-06-10 11:30:06.454150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.372 [2024-06-10 11:30:06.454162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.372 [2024-06-10 11:30:06.454174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.372 [2024-06-10 11:30:06.454182] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454197] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:21:41.372 [2024-06-10 11:30:06.454224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:21:41.372 [2024-06-10 11:30:06.454234] nvme_ctrlr.c:2945:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:21:41.372 [2024-06-10 11:30:06.454243] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454254] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454266] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:41.372 [2024-06-10 11:30:06.454296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:21:41.372 [2024-06-10 11:30:06.454354] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454367] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454379] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:21:41.372 [2024-06-10 11:30:06.454387] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:21:41.372 [2024-06-10 11:30:06.454397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:21:41.372 [2024-06-10 11:30:06.454410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:21:41.372 [2024-06-10 11:30:06.454427] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:21:41.372 [2024-06-10 11:30:06.454441] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454453] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454463] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:41.372 [2024-06-10 11:30:06.454472] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:41.372 [2024-06-10 11:30:06.454481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:41.372 [2024-06-10 11:30:06.454507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:21:41.372 [2024-06-10 11:30:06.454522] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454534] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454545] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:21:41.372 [2024-06-10 11:30:06.454553] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:41.372 [2024-06-10 11:30:06.454562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:41.372 [2024-06-10 11:30:06.454587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:21:41.372 [2024-06-10 11:30:06.454601] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454612] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454624] 
nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454634] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454643] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:21:41.372 [2024-06-10 11:30:06.454651] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:21:41.373 [2024-06-10 11:30:06.454659] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:21:41.373 [2024-06-10 11:30:06.454668] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:21:41.373 [2024-06-10 11:30:06.454695] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:21:41.373 [2024-06-10 11:30:06.454709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:21:41.373 [2024-06-10 11:30:06.454727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:21:41.373 [2024-06-10 11:30:06.454738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:21:41.373 [2024-06-10 11:30:06.454756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:21:41.373 [2024-06-10 11:30:06.454775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:21:41.373 [2024-06-10 11:30:06.454792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:41.373 [2024-06-10 11:30:06.454803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:21:41.373 [2024-06-10 11:30:06.454820] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:21:41.373 [2024-06-10 11:30:06.454828] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:21:41.373 [2024-06-10 11:30:06.454834] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:21:41.373 [2024-06-10 11:30:06.454841] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:21:41.373 [2024-06-10 11:30:06.454850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:21:41.373 [2024-06-10 11:30:06.454861] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:21:41.373 [2024-06-10 11:30:06.454869] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:21:41.373 [2024-06-10 11:30:06.454878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:21:41.373 [2024-06-10 11:30:06.454889] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:21:41.373 [2024-06-10 11:30:06.454897] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:21:41.373 [2024-06-10 11:30:06.454908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:21:41.373 [2024-06-10 11:30:06.454923] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:21:41.373 [2024-06-10 11:30:06.454931] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:21:41.373 [2024-06-10 11:30:06.454940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:21:41.373 [2024-06-10 11:30:06.454951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:21:41.373 [2024-06-10 11:30:06.454970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:21:41.373 [2024-06-10 11:30:06.454987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:21:41.373 [2024-06-10 11:30:06.455003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:21:41.373 ===================================================== 00:21:41.373 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:41.373 ===================================================== 00:21:41.373 Controller Capabilities/Features 00:21:41.373 ================================ 00:21:41.373 Vendor ID: 4e58 00:21:41.373 Subsystem Vendor ID: 4e58 00:21:41.373 Serial Number: SPDK1 00:21:41.373 Model Number: SPDK bdev Controller 00:21:41.373 Firmware Version: 24.09 00:21:41.373 Recommended Arb Burst: 6 00:21:41.373 IEEE OUI Identifier: 8d 6b 50 00:21:41.373 Multi-path I/O 00:21:41.373 May have multiple subsystem ports: Yes 00:21:41.373 May have multiple controllers: Yes 00:21:41.373 Associated with SR-IOV VF: No 00:21:41.373 Max Data Transfer Size: 131072 00:21:41.373 Max Number of Namespaces: 32 00:21:41.373 Max Number of I/O Queues: 127 00:21:41.373 NVMe Specification Version (VS): 1.3 00:21:41.373 NVMe Specification Version (Identify): 1.3 00:21:41.373 Maximum Queue Entries: 256 00:21:41.373 Contiguous Queues Required: Yes 00:21:41.373 Arbitration Mechanisms Supported 00:21:41.373 Weighted Round Robin: Not Supported 00:21:41.373 Vendor Specific: Not Supported 00:21:41.373 Reset Timeout: 15000 ms 00:21:41.373 Doorbell Stride: 4 bytes 00:21:41.373 NVM Subsystem Reset: Not Supported 00:21:41.373 Command Sets Supported 00:21:41.373 NVM Command Set: Supported 00:21:41.373 Boot Partition: Not Supported 00:21:41.373 Memory Page Size Minimum: 4096 bytes 00:21:41.373 Memory Page Size Maximum: 4096 bytes 00:21:41.373 Persistent Memory Region: Not Supported 00:21:41.373 Optional Asynchronous Events Supported 00:21:41.373 Namespace Attribute Notices: Supported 00:21:41.373 Firmware Activation Notices: Not Supported 00:21:41.373 ANA Change Notices: Not Supported 00:21:41.373 PLE Aggregate Log Change Notices: 
Not Supported 00:21:41.373 LBA Status Info Alert Notices: Not Supported 00:21:41.373 EGE Aggregate Log Change Notices: Not Supported 00:21:41.373 Normal NVM Subsystem Shutdown event: Not Supported 00:21:41.373 Zone Descriptor Change Notices: Not Supported 00:21:41.373 Discovery Log Change Notices: Not Supported 00:21:41.373 Controller Attributes 00:21:41.373 128-bit Host Identifier: Supported 00:21:41.373 Non-Operational Permissive Mode: Not Supported 00:21:41.373 NVM Sets: Not Supported 00:21:41.373 Read Recovery Levels: Not Supported 00:21:41.373 Endurance Groups: Not Supported 00:21:41.373 Predictable Latency Mode: Not Supported 00:21:41.373 Traffic Based Keep ALive: Not Supported 00:21:41.373 Namespace Granularity: Not Supported 00:21:41.373 SQ Associations: Not Supported 00:21:41.373 UUID List: Not Supported 00:21:41.373 Multi-Domain Subsystem: Not Supported 00:21:41.373 Fixed Capacity Management: Not Supported 00:21:41.373 Variable Capacity Management: Not Supported 00:21:41.373 Delete Endurance Group: Not Supported 00:21:41.373 Delete NVM Set: Not Supported 00:21:41.373 Extended LBA Formats Supported: Not Supported 00:21:41.373 Flexible Data Placement Supported: Not Supported 00:21:41.373 00:21:41.373 Controller Memory Buffer Support 00:21:41.373 ================================ 00:21:41.373 Supported: No 00:21:41.373 00:21:41.373 Persistent Memory Region Support 00:21:41.373 ================================ 00:21:41.373 Supported: No 00:21:41.373 00:21:41.373 Admin Command Set Attributes 00:21:41.373 ============================ 00:21:41.373 Security Send/Receive: Not Supported 00:21:41.373 Format NVM: Not Supported 00:21:41.373 Firmware Activate/Download: Not Supported 00:21:41.373 Namespace Management: Not Supported 00:21:41.373 Device Self-Test: Not Supported 00:21:41.373 Directives: Not Supported 00:21:41.373 NVMe-MI: Not Supported 00:21:41.373 Virtualization Management: Not Supported 00:21:41.373 Doorbell Buffer Config: Not Supported 00:21:41.373 Get LBA Status Capability: Not Supported 00:21:41.373 Command & Feature Lockdown Capability: Not Supported 00:21:41.373 Abort Command Limit: 4 00:21:41.373 Async Event Request Limit: 4 00:21:41.373 Number of Firmware Slots: N/A 00:21:41.373 Firmware Slot 1 Read-Only: N/A 00:21:41.373 Firmware Activation Without Reset: N/A 00:21:41.373 Multiple Update Detection Support: N/A 00:21:41.373 Firmware Update Granularity: No Information Provided 00:21:41.373 Per-Namespace SMART Log: No 00:21:41.373 Asymmetric Namespace Access Log Page: Not Supported 00:21:41.373 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:21:41.373 Command Effects Log Page: Supported 00:21:41.373 Get Log Page Extended Data: Supported 00:21:41.373 Telemetry Log Pages: Not Supported 00:21:41.373 Persistent Event Log Pages: Not Supported 00:21:41.373 Supported Log Pages Log Page: May Support 00:21:41.373 Commands Supported & Effects Log Page: Not Supported 00:21:41.373 Feature Identifiers & Effects Log Page:May Support 00:21:41.373 NVMe-MI Commands & Effects Log Page: May Support 00:21:41.373 Data Area 4 for Telemetry Log: Not Supported 00:21:41.373 Error Log Page Entries Supported: 128 00:21:41.373 Keep Alive: Supported 00:21:41.373 Keep Alive Granularity: 10000 ms 00:21:41.373 00:21:41.373 NVM Command Set Attributes 00:21:41.373 ========================== 00:21:41.373 Submission Queue Entry Size 00:21:41.373 Max: 64 00:21:41.373 Min: 64 00:21:41.373 Completion Queue Entry Size 00:21:41.373 Max: 16 00:21:41.373 Min: 16 00:21:41.373 Number of Namespaces: 32 00:21:41.373 Compare 
Command: Supported 00:21:41.373 Write Uncorrectable Command: Not Supported 00:21:41.373 Dataset Management Command: Supported 00:21:41.373 Write Zeroes Command: Supported 00:21:41.373 Set Features Save Field: Not Supported 00:21:41.373 Reservations: Not Supported 00:21:41.373 Timestamp: Not Supported 00:21:41.373 Copy: Supported 00:21:41.373 Volatile Write Cache: Present 00:21:41.373 Atomic Write Unit (Normal): 1 00:21:41.373 Atomic Write Unit (PFail): 1 00:21:41.374 Atomic Compare & Write Unit: 1 00:21:41.374 Fused Compare & Write: Supported 00:21:41.374 Scatter-Gather List 00:21:41.374 SGL Command Set: Supported (Dword aligned) 00:21:41.374 SGL Keyed: Not Supported 00:21:41.374 SGL Bit Bucket Descriptor: Not Supported 00:21:41.374 SGL Metadata Pointer: Not Supported 00:21:41.374 Oversized SGL: Not Supported 00:21:41.374 SGL Metadata Address: Not Supported 00:21:41.374 SGL Offset: Not Supported 00:21:41.374 Transport SGL Data Block: Not Supported 00:21:41.374 Replay Protected Memory Block: Not Supported 00:21:41.374 00:21:41.374 Firmware Slot Information 00:21:41.374 ========================= 00:21:41.374 Active slot: 1 00:21:41.374 Slot 1 Firmware Revision: 24.09 00:21:41.374 00:21:41.374 00:21:41.374 Commands Supported and Effects 00:21:41.374 ============================== 00:21:41.374 Admin Commands 00:21:41.374 -------------- 00:21:41.374 Get Log Page (02h): Supported 00:21:41.374 Identify (06h): Supported 00:21:41.374 Abort (08h): Supported 00:21:41.374 Set Features (09h): Supported 00:21:41.374 Get Features (0Ah): Supported 00:21:41.374 Asynchronous Event Request (0Ch): Supported 00:21:41.374 Keep Alive (18h): Supported 00:21:41.374 I/O Commands 00:21:41.374 ------------ 00:21:41.374 Flush (00h): Supported LBA-Change 00:21:41.374 Write (01h): Supported LBA-Change 00:21:41.374 Read (02h): Supported 00:21:41.374 Compare (05h): Supported 00:21:41.374 Write Zeroes (08h): Supported LBA-Change 00:21:41.374 Dataset Management (09h): Supported LBA-Change 00:21:41.374 Copy (19h): Supported LBA-Change 00:21:41.374 Unknown (79h): Supported LBA-Change 00:21:41.374 Unknown (7Ah): Supported 00:21:41.374 00:21:41.374 Error Log 00:21:41.374 ========= 00:21:41.374 00:21:41.374 Arbitration 00:21:41.374 =========== 00:21:41.374 Arbitration Burst: 1 00:21:41.374 00:21:41.374 Power Management 00:21:41.374 ================ 00:21:41.374 Number of Power States: 1 00:21:41.374 Current Power State: Power State #0 00:21:41.374 Power State #0: 00:21:41.374 Max Power: 0.00 W 00:21:41.374 Non-Operational State: Operational 00:21:41.374 Entry Latency: Not Reported 00:21:41.374 Exit Latency: Not Reported 00:21:41.374 Relative Read Throughput: 0 00:21:41.374 Relative Read Latency: 0 00:21:41.374 Relative Write Throughput: 0 00:21:41.374 Relative Write Latency: 0 00:21:41.374 Idle Power: Not Reported 00:21:41.374 Active Power: Not Reported 00:21:41.374 Non-Operational Permissive Mode: Not Supported 00:21:41.374 00:21:41.374 Health Information 00:21:41.374 ================== 00:21:41.374 Critical Warnings: 00:21:41.374 Available Spare Space: OK 00:21:41.374 Temperature: OK 00:21:41.374 Device Reliability: OK 00:21:41.374 Read Only: No 00:21:41.374 Volatile Memory Backup: OK 00:21:41.374 Current Temperature: 0 Kelvin (-2[2024-06-10 11:30:06.455119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:21:41.374 [2024-06-10 11:30:06.455132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:21:41.374 [2024-06-10 11:30:06.455167] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:21:41.374 [2024-06-10 11:30:06.455181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.374 [2024-06-10 11:30:06.455192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.374 [2024-06-10 11:30:06.455203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.374 [2024-06-10 11:30:06.455214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.374 [2024-06-10 11:30:06.455731] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:21:41.374 [2024-06-10 11:30:06.455748] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:21:41.374 [2024-06-10 11:30:06.456734] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:41.374 [2024-06-10 11:30:06.456803] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:21:41.374 [2024-06-10 11:30:06.456817] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:21:41.374 [2024-06-10 11:30:06.457743] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:21:41.374 [2024-06-10 11:30:06.457760] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:21:41.374 [2024-06-10 11:30:06.457820] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:21:41.374 [2024-06-10 11:30:06.459784] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:21:41.634 73 Celsius) 00:21:41.634 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:41.634 Available Spare: 0% 00:21:41.634 Available Spare Threshold: 0% 00:21:41.634 Life Percentage Used: 0% 00:21:41.634 Data Units Read: 0 00:21:41.634 Data Units Written: 0 00:21:41.634 Host Read Commands: 0 00:21:41.634 Host Write Commands: 0 00:21:41.634 Controller Busy Time: 0 minutes 00:21:41.634 Power Cycles: 0 00:21:41.634 Power On Hours: 0 hours 00:21:41.634 Unsafe Shutdowns: 0 00:21:41.634 Unrecoverable Media Errors: 0 00:21:41.634 Lifetime Error Log Entries: 0 00:21:41.634 Warning Temperature Time: 0 minutes 00:21:41.634 Critical Temperature Time: 0 minutes 00:21:41.634 00:21:41.634 Number of Queues 00:21:41.634 ================ 00:21:41.634 Number of I/O Submission Queues: 127 00:21:41.634 Number of I/O Completion Queues: 127 00:21:41.634 00:21:41.634 Active Namespaces 00:21:41.634 ================= 00:21:41.634 Namespace ID:1 00:21:41.634 Error Recovery Timeout: Unlimited 00:21:41.634 Command Set Identifier: NVM (00h) 00:21:41.634 Deallocate: Supported 00:21:41.634 Deallocated/Unwritten Error: Not Supported 00:21:41.634 Deallocated Read Value: Unknown 00:21:41.634 Deallocate 
in Write Zeroes: Not Supported 00:21:41.634 Deallocated Guard Field: 0xFFFF 00:21:41.634 Flush: Supported 00:21:41.634 Reservation: Supported 00:21:41.634 Namespace Sharing Capabilities: Multiple Controllers 00:21:41.634 Size (in LBAs): 131072 (0GiB) 00:21:41.634 Capacity (in LBAs): 131072 (0GiB) 00:21:41.634 Utilization (in LBAs): 131072 (0GiB) 00:21:41.634 NGUID: 1C5C917E11F24EE3AA183E310CFDBCB8 00:21:41.634 UUID: 1c5c917e-11f2-4ee3-aa18-3e310cfdbcb8 00:21:41.634 Thin Provisioning: Not Supported 00:21:41.634 Per-NS Atomic Units: Yes 00:21:41.634 Atomic Boundary Size (Normal): 0 00:21:41.634 Atomic Boundary Size (PFail): 0 00:21:41.634 Atomic Boundary Offset: 0 00:21:41.634 Maximum Single Source Range Length: 65535 00:21:41.634 Maximum Copy Length: 65535 00:21:41.634 Maximum Source Range Count: 1 00:21:41.634 NGUID/EUI64 Never Reused: No 00:21:41.634 Namespace Write Protected: No 00:21:41.634 Number of LBA Formats: 1 00:21:41.634 Current LBA Format: LBA Format #00 00:21:41.634 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:41.634 00:21:41.634 11:30:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:21:41.634 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.634 [2024-06-10 11:30:06.702454] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:46.918 Initializing NVMe Controllers 00:21:46.918 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:46.918 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:21:46.918 Initialization complete. Launching workers. 00:21:46.918 ======================================================== 00:21:46.918 Latency(us) 00:21:46.918 Device Information : IOPS MiB/s Average min max 00:21:46.918 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32085.39 125.33 3989.34 1253.51 8209.63 00:21:46.918 ======================================================== 00:21:46.918 Total : 32085.39 125.33 3989.34 1253.51 8209.63 00:21:46.918 00:21:46.918 [2024-06-10 11:30:11.724669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:46.918 11:30:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:21:46.918 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.918 [2024-06-10 11:30:11.976961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:52.206 Initializing NVMe Controllers 00:21:52.206 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:52.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:21:52.206 Initialization complete. Launching workers. 
00:21:52.206 ======================================================== 00:21:52.206 Latency(us) 00:21:52.206 Device Information : IOPS MiB/s Average min max 00:21:52.206 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.60 62.70 7979.89 6977.97 8980.70 00:21:52.206 ======================================================== 00:21:52.206 Total : 16050.60 62.70 7979.89 6977.97 8980.70 00:21:52.206 00:21:52.206 [2024-06-10 11:30:17.018077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:52.206 11:30:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:21:52.206 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.465 [2024-06-10 11:30:17.331505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:21:57.738 [2024-06-10 11:30:22.407892] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:21:57.738 Initializing NVMe Controllers 00:21:57.738 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:57.738 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:21:57.738 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:21:57.738 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:21:57.738 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:21:57.738 Initialization complete. Launching workers. 00:21:57.738 Starting thread on core 2 00:21:57.738 Starting thread on core 3 00:21:57.738 Starting thread on core 1 00:21:57.738 11:30:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:21:57.738 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.738 [2024-06-10 11:30:22.793058] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:22:01.031 [2024-06-10 11:30:25.874790] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:22:01.031 Initializing NVMe Controllers 00:22:01.031 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:22:01.031 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:22:01.031 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:22:01.031 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:22:01.031 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:22:01.031 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:22:01.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:22:01.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:22:01.031 Initialization complete. Launching workers. 
00:22:01.031 Starting thread on core 1 with urgent priority queue 00:22:01.031 Starting thread on core 2 with urgent priority queue 00:22:01.031 Starting thread on core 3 with urgent priority queue 00:22:01.031 Starting thread on core 0 with urgent priority queue 00:22:01.031 SPDK bdev Controller (SPDK1 ) core 0: 8489.33 IO/s 11.78 secs/100000 ios 00:22:01.031 SPDK bdev Controller (SPDK1 ) core 1: 9422.67 IO/s 10.61 secs/100000 ios 00:22:01.031 SPDK bdev Controller (SPDK1 ) core 2: 8250.00 IO/s 12.12 secs/100000 ios 00:22:01.031 SPDK bdev Controller (SPDK1 ) core 3: 7132.00 IO/s 14.02 secs/100000 ios 00:22:01.031 ======================================================== 00:22:01.031 00:22:01.031 11:30:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:22:01.031 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.291 [2024-06-10 11:30:26.245080] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:22:01.291 Initializing NVMe Controllers 00:22:01.291 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:22:01.291 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:22:01.291 Namespace ID: 1 size: 0GB 00:22:01.291 Initialization complete. 00:22:01.291 INFO: using host memory buffer for IO 00:22:01.291 Hello world! 00:22:01.291 [2024-06-10 11:30:26.279762] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:22:01.292 11:30:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:22:01.624 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.624 [2024-06-10 11:30:26.647157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:22:02.562 Initializing NVMe Controllers 00:22:02.562 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:22:02.562 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:22:02.562 Initialization complete. Launching workers. 
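For orientation before the overhead results that follow: every data-path tool exercised in this test (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world, overhead) reaches the emulated controller through the same VFIOUSER transport ID string instead of a PCI address or an IP endpoint. The flags below are copied from the xtrace entries above; only the absolute workspace paths are shortened, so treat the relative paths as illustrative.

    # restatement of the recorded invocations with the common transport ID factored out
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

    ./build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
    ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    ./build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    ./build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
    ./build/examples/hello_world -d 256 -g -r "$TRID"
    ./test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TRID"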
00:22:02.562 submit (in ns) avg, min, max = 10232.7, 4039.2, 4002317.6 00:22:02.562 complete (in ns) avg, min, max = 18478.0, 2390.4, 4033502.4 00:22:02.562 00:22:02.562 Submit histogram 00:22:02.562 ================ 00:22:02.562 Range in us Cumulative Count 00:22:02.562 4.019 - 4.045: 0.0119% ( 2) 00:22:02.562 4.045 - 4.070: 0.4897% ( 80) 00:22:02.562 4.070 - 4.096: 3.5294% ( 509) 00:22:02.562 4.096 - 4.122: 10.0030% ( 1084) 00:22:02.562 4.122 - 4.147: 20.9734% ( 1837) 00:22:02.562 4.147 - 4.173: 30.0508% ( 1520) 00:22:02.562 4.173 - 4.198: 35.8376% ( 969) 00:22:02.562 4.198 - 4.224: 41.2840% ( 912) 00:22:02.562 4.224 - 4.250: 47.5306% ( 1046) 00:22:02.562 4.250 - 4.275: 61.2004% ( 2289) 00:22:02.562 4.275 - 4.301: 73.6817% ( 2090) 00:22:02.562 4.301 - 4.326: 81.3019% ( 1276) 00:22:02.562 4.326 - 4.352: 85.3031% ( 670) 00:22:02.562 4.352 - 4.378: 86.9274% ( 272) 00:22:02.562 4.378 - 4.403: 87.7337% ( 135) 00:22:02.562 4.403 - 4.429: 88.8385% ( 185) 00:22:02.562 4.429 - 4.454: 90.2956% ( 244) 00:22:02.562 4.454 - 4.480: 91.5855% ( 216) 00:22:02.562 4.480 - 4.506: 92.6904% ( 185) 00:22:02.562 4.506 - 4.531: 94.3983% ( 286) 00:22:02.562 4.531 - 4.557: 96.3273% ( 323) 00:22:02.562 4.557 - 4.582: 97.4022% ( 180) 00:22:02.562 4.582 - 4.608: 98.2024% ( 134) 00:22:02.562 4.608 - 4.634: 98.7698% ( 95) 00:22:02.562 4.634 - 4.659: 99.1042% ( 56) 00:22:02.562 4.659 - 4.685: 99.3073% ( 34) 00:22:02.562 4.685 - 4.710: 99.4088% ( 17) 00:22:02.562 4.710 - 4.736: 99.4327% ( 4) 00:22:02.562 4.736 - 4.762: 99.4446% ( 2) 00:22:02.562 4.762 - 4.787: 99.4625% ( 3) 00:22:02.562 4.787 - 4.813: 99.4924% ( 5) 00:22:02.562 4.813 - 4.838: 99.5043% ( 2) 00:22:02.562 4.838 - 4.864: 99.5163% ( 2) 00:22:02.562 4.864 - 4.890: 99.5282% ( 2) 00:22:02.562 4.941 - 4.966: 99.5402% ( 2) 00:22:02.562 5.018 - 5.043: 99.5461% ( 1) 00:22:02.562 5.043 - 5.069: 99.5581% ( 2) 00:22:02.562 5.197 - 5.222: 99.5640% ( 1) 00:22:02.562 5.248 - 5.274: 99.5700% ( 1) 00:22:02.562 5.299 - 5.325: 99.5760% ( 1) 00:22:02.562 5.350 - 5.376: 99.5879% ( 2) 00:22:02.562 5.376 - 5.402: 99.5939% ( 1) 00:22:02.562 5.453 - 5.478: 99.6059% ( 2) 00:22:02.562 5.504 - 5.530: 99.6118% ( 1) 00:22:02.562 5.606 - 5.632: 99.6178% ( 1) 00:22:02.562 5.658 - 5.683: 99.6238% ( 1) 00:22:02.562 6.554 - 6.605: 99.6297% ( 1) 00:22:02.562 6.656 - 6.707: 99.6357% ( 1) 00:22:02.562 6.707 - 6.758: 99.6417% ( 1) 00:22:02.562 6.810 - 6.861: 99.6477% ( 1) 00:22:02.562 6.912 - 6.963: 99.6536% ( 1) 00:22:02.562 6.963 - 7.014: 99.6596% ( 1) 00:22:02.562 7.014 - 7.066: 99.6715% ( 2) 00:22:02.562 7.117 - 7.168: 99.6895% ( 3) 00:22:02.562 7.270 - 7.322: 99.7014% ( 2) 00:22:02.562 7.322 - 7.373: 99.7133% ( 2) 00:22:02.562 7.424 - 7.475: 99.7313% ( 3) 00:22:02.562 7.526 - 7.578: 99.7432% ( 2) 00:22:02.562 7.629 - 7.680: 99.7492% ( 1) 00:22:02.562 7.680 - 7.731: 99.7552% ( 1) 00:22:02.562 7.731 - 7.782: 99.7611% ( 1) 00:22:02.562 7.885 - 7.936: 99.7671% ( 1) 00:22:02.562 7.936 - 7.987: 99.7790% ( 2) 00:22:02.562 7.987 - 8.038: 99.7850% ( 1) 00:22:02.562 8.038 - 8.090: 99.7970% ( 2) 00:22:02.562 8.090 - 8.141: 99.8029% ( 1) 00:22:02.562 8.294 - 8.346: 99.8089% ( 1) 00:22:02.562 8.448 - 8.499: 99.8149% ( 1) 00:22:02.562 10.854 - 10.906: 99.8208% ( 1) 00:22:02.562 10.906 - 10.957: 99.8268% ( 1) 00:22:02.562 12.851 - 12.902: 99.8328% ( 1) 00:22:02.562 13.722 - 13.824: 99.8388% ( 1) 00:22:02.562 15.667 - 15.770: 99.8447% ( 1) 00:22:02.562 19.558 - 19.661: 99.8507% ( 1) 00:22:02.562 3984.589 - 4010.803: 100.0000% ( 25) 00:22:02.562 00:22:02.562 Complete histogram 00:22:02.562 
================== 00:22:02.562 Range in us Cumulative Count 00:22:02.562 2.381 - 2.394: 0.0060% ( 1) 00:22:02.562 2.394 - 2.406: 0.2867% ( 47) 00:22:02.822 2.406 - [2024-06-10 11:30:27.669677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:22:02.822 2.419: 5.4524% ( 865) 00:22:02.822 2.419 - 2.432: 17.5754% ( 2030) 00:22:02.822 2.432 - 2.445: 22.6754% ( 854) 00:22:02.822 2.445 - 2.458: 24.9925% ( 388) 00:22:02.822 2.458 - 2.470: 33.7713% ( 1470) 00:22:02.822 2.470 - 2.483: 58.1845% ( 4088) 00:22:02.822 2.483 - 2.496: 78.4174% ( 3388) 00:22:02.822 2.496 - 2.509: 84.2102% ( 970) 00:22:02.822 2.509 - 2.522: 89.1251% ( 823) 00:22:02.822 2.522 - 2.534: 93.1502% ( 674) 00:22:02.822 2.534 - 2.547: 94.7268% ( 264) 00:22:02.822 2.547 - 2.560: 95.8913% ( 195) 00:22:02.822 2.560 - 2.573: 96.8946% ( 168) 00:22:02.822 2.573 - 2.586: 97.8262% ( 156) 00:22:02.822 2.586 - 2.598: 98.5190% ( 116) 00:22:02.822 2.598 - 2.611: 98.8713% ( 59) 00:22:02.822 2.611 - 2.624: 99.0206% ( 25) 00:22:02.822 2.624 - 2.637: 99.0564% ( 6) 00:22:02.822 2.637 - 2.650: 99.0923% ( 6) 00:22:02.822 2.650 - 2.662: 99.1102% ( 3) 00:22:02.822 2.662 - 2.675: 99.1281% ( 3) 00:22:02.822 2.675 - 2.688: 99.1400% ( 2) 00:22:02.822 2.688 - 2.701: 99.1460% ( 1) 00:22:02.822 2.714 - 2.726: 99.1520% ( 1) 00:22:02.822 2.739 - 2.752: 99.1580% ( 1) 00:22:02.822 2.752 - 2.765: 99.1639% ( 1) 00:22:02.822 2.803 - 2.816: 99.1759% ( 2) 00:22:02.822 2.829 - 2.842: 99.1818% ( 1) 00:22:02.822 2.842 - 2.854: 99.1878% ( 1) 00:22:02.822 2.893 - 2.906: 99.1938% ( 1) 00:22:02.822 2.906 - 2.918: 99.1998% ( 1) 00:22:02.822 2.918 - 2.931: 99.2057% ( 1) 00:22:02.822 2.931 - 2.944: 99.2117% ( 1) 00:22:02.822 2.944 - 2.957: 99.2177% ( 1) 00:22:02.822 2.957 - 2.970: 99.2236% ( 1) 00:22:02.822 2.982 - 2.995: 99.2296% ( 1) 00:22:02.822 3.021 - 3.034: 99.2356% ( 1) 00:22:02.822 3.034 - 3.046: 99.2416% ( 1) 00:22:02.822 3.059 - 3.072: 99.2595% ( 3) 00:22:02.822 3.085 - 3.098: 99.2714% ( 2) 00:22:02.822 3.123 - 3.136: 99.2774% ( 1) 00:22:02.822 3.149 - 3.162: 99.2834% ( 1) 00:22:02.822 3.174 - 3.187: 99.2953% ( 2) 00:22:02.822 3.213 - 3.226: 99.3132% ( 3) 00:22:02.822 3.264 - 3.277: 99.3192% ( 1) 00:22:02.822 3.379 - 3.405: 99.3252% ( 1) 00:22:02.822 4.864 - 4.890: 99.3311% ( 1) 00:22:02.822 5.069 - 5.094: 99.3371% ( 1) 00:22:02.822 5.094 - 5.120: 99.3431% ( 1) 00:22:02.822 5.222 - 5.248: 99.3491% ( 1) 00:22:02.822 5.248 - 5.274: 99.3610% ( 2) 00:22:02.822 5.274 - 5.299: 99.3670% ( 1) 00:22:02.822 5.402 - 5.427: 99.3729% ( 1) 00:22:02.822 5.478 - 5.504: 99.3849% ( 2) 00:22:02.822 5.530 - 5.555: 99.3909% ( 1) 00:22:02.822 5.606 - 5.632: 99.3968% ( 1) 00:22:02.822 5.760 - 5.786: 99.4088% ( 2) 00:22:02.822 5.786 - 5.811: 99.4148% ( 1) 00:22:02.822 5.811 - 5.837: 99.4267% ( 2) 00:22:02.822 5.837 - 5.862: 99.4327% ( 1) 00:22:02.822 5.888 - 5.914: 99.4386% ( 1) 00:22:02.822 5.965 - 5.990: 99.4506% ( 2) 00:22:02.822 5.990 - 6.016: 99.4566% ( 1) 00:22:02.822 6.016 - 6.042: 99.4625% ( 1) 00:22:02.822 6.067 - 6.093: 99.4745% ( 2) 00:22:02.822 6.093 - 6.118: 99.4864% ( 2) 00:22:02.822 6.118 - 6.144: 99.4924% ( 1) 00:22:02.822 6.170 - 6.195: 99.4984% ( 1) 00:22:02.822 6.323 - 6.349: 99.5043% ( 1) 00:22:02.822 6.426 - 6.451: 99.5103% ( 1) 00:22:02.822 6.477 - 6.502: 99.5163% ( 1) 00:22:02.822 6.528 - 6.554: 99.5222% ( 1) 00:22:02.822 6.554 - 6.605: 99.5282% ( 1) 00:22:02.822 6.605 - 6.656: 99.5342% ( 1) 00:22:02.822 6.707 - 6.758: 99.5461% ( 2) 00:22:02.822 6.963 - 7.014: 99.5521% ( 1) 00:22:02.822 7.014 - 7.066: 99.5581% ( 
1) 00:22:02.822 7.219 - 7.270: 99.5640% ( 1) 00:22:02.822 7.270 - 7.322: 99.5700% ( 1) 00:22:02.822 7.629 - 7.680: 99.5760% ( 1) 00:22:02.822 7.680 - 7.731: 99.5820% ( 1) 00:22:02.822 8.141 - 8.192: 99.5879% ( 1) 00:22:02.822 9.626 - 9.677: 99.5939% ( 1) 00:22:02.822 12.339 - 12.390: 99.5999% ( 1) 00:22:02.822 3984.589 - 4010.803: 99.9881% ( 65) 00:22:02.822 4010.803 - 4037.018: 100.0000% ( 2) 00:22:02.822 00:22:02.822 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:22:02.822 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:22:02.822 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:22:02.822 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:22:02.822 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:22:03.081 [ 00:22:03.081 { 00:22:03.081 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:03.081 "subtype": "Discovery", 00:22:03.081 "listen_addresses": [], 00:22:03.081 "allow_any_host": true, 00:22:03.081 "hosts": [] 00:22:03.081 }, 00:22:03.081 { 00:22:03.081 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:22:03.081 "subtype": "NVMe", 00:22:03.081 "listen_addresses": [ 00:22:03.081 { 00:22:03.081 "trtype": "VFIOUSER", 00:22:03.081 "adrfam": "IPv4", 00:22:03.081 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:22:03.081 "trsvcid": "0" 00:22:03.081 } 00:22:03.081 ], 00:22:03.081 "allow_any_host": true, 00:22:03.081 "hosts": [], 00:22:03.081 "serial_number": "SPDK1", 00:22:03.081 "model_number": "SPDK bdev Controller", 00:22:03.081 "max_namespaces": 32, 00:22:03.081 "min_cntlid": 1, 00:22:03.081 "max_cntlid": 65519, 00:22:03.081 "namespaces": [ 00:22:03.081 { 00:22:03.081 "nsid": 1, 00:22:03.081 "bdev_name": "Malloc1", 00:22:03.081 "name": "Malloc1", 00:22:03.081 "nguid": "1C5C917E11F24EE3AA183E310CFDBCB8", 00:22:03.081 "uuid": "1c5c917e-11f2-4ee3-aa18-3e310cfdbcb8" 00:22:03.081 } 00:22:03.081 ] 00:22:03.081 }, 00:22:03.081 { 00:22:03.081 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:22:03.081 "subtype": "NVMe", 00:22:03.081 "listen_addresses": [ 00:22:03.081 { 00:22:03.081 "trtype": "VFIOUSER", 00:22:03.081 "adrfam": "IPv4", 00:22:03.081 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:22:03.081 "trsvcid": "0" 00:22:03.081 } 00:22:03.081 ], 00:22:03.081 "allow_any_host": true, 00:22:03.081 "hosts": [], 00:22:03.081 "serial_number": "SPDK2", 00:22:03.081 "model_number": "SPDK bdev Controller", 00:22:03.081 "max_namespaces": 32, 00:22:03.081 "min_cntlid": 1, 00:22:03.081 "max_cntlid": 65519, 00:22:03.081 "namespaces": [ 00:22:03.081 { 00:22:03.081 "nsid": 1, 00:22:03.081 "bdev_name": "Malloc2", 00:22:03.081 "name": "Malloc2", 00:22:03.081 "nguid": "AF2BB7FDCA87451DAC8AAFEAA5A774A8", 00:22:03.081 "uuid": "af2bb7fd-ca87-451d-ac8a-afeaa5a774a8" 00:22:03.081 } 00:22:03.081 ] 00:22:03.081 } 00:22:03.081 ] 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3896115 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:22:03.081 11:30:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:22:03.082 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.340 [2024-06-10 11:30:28.201096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:22:03.340 Malloc3 00:22:03.340 11:30:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:22:03.340 [2024-06-10 11:30:28.439040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:22:03.599 11:30:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:22:03.599 Asynchronous Event Request test 00:22:03.599 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:22:03.599 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:22:03.599 Registering asynchronous event callbacks... 00:22:03.599 Starting namespace attribute notice tests for all controllers... 00:22:03.599 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:03.599 aer_cb - Changed Namespace 00:22:03.599 Cleaning up... 
00:22:03.599 [ 00:22:03.599 { 00:22:03.599 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:03.599 "subtype": "Discovery", 00:22:03.599 "listen_addresses": [], 00:22:03.599 "allow_any_host": true, 00:22:03.599 "hosts": [] 00:22:03.599 }, 00:22:03.599 { 00:22:03.599 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:22:03.599 "subtype": "NVMe", 00:22:03.599 "listen_addresses": [ 00:22:03.599 { 00:22:03.599 "trtype": "VFIOUSER", 00:22:03.599 "adrfam": "IPv4", 00:22:03.599 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:22:03.599 "trsvcid": "0" 00:22:03.599 } 00:22:03.599 ], 00:22:03.599 "allow_any_host": true, 00:22:03.599 "hosts": [], 00:22:03.599 "serial_number": "SPDK1", 00:22:03.599 "model_number": "SPDK bdev Controller", 00:22:03.599 "max_namespaces": 32, 00:22:03.599 "min_cntlid": 1, 00:22:03.599 "max_cntlid": 65519, 00:22:03.599 "namespaces": [ 00:22:03.599 { 00:22:03.599 "nsid": 1, 00:22:03.599 "bdev_name": "Malloc1", 00:22:03.599 "name": "Malloc1", 00:22:03.599 "nguid": "1C5C917E11F24EE3AA183E310CFDBCB8", 00:22:03.599 "uuid": "1c5c917e-11f2-4ee3-aa18-3e310cfdbcb8" 00:22:03.599 }, 00:22:03.599 { 00:22:03.599 "nsid": 2, 00:22:03.599 "bdev_name": "Malloc3", 00:22:03.599 "name": "Malloc3", 00:22:03.599 "nguid": "1C3335DA7D9F49BFAFE8AB9C3E1AE03C", 00:22:03.599 "uuid": "1c3335da-7d9f-49bf-afe8-ab9c3e1ae03c" 00:22:03.599 } 00:22:03.599 ] 00:22:03.599 }, 00:22:03.599 { 00:22:03.599 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:22:03.599 "subtype": "NVMe", 00:22:03.599 "listen_addresses": [ 00:22:03.599 { 00:22:03.599 "trtype": "VFIOUSER", 00:22:03.599 "adrfam": "IPv4", 00:22:03.599 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:22:03.599 "trsvcid": "0" 00:22:03.599 } 00:22:03.599 ], 00:22:03.599 "allow_any_host": true, 00:22:03.599 "hosts": [], 00:22:03.599 "serial_number": "SPDK2", 00:22:03.599 "model_number": "SPDK bdev Controller", 00:22:03.599 "max_namespaces": 32, 00:22:03.599 "min_cntlid": 1, 00:22:03.599 "max_cntlid": 65519, 00:22:03.599 "namespaces": [ 00:22:03.599 { 00:22:03.599 "nsid": 1, 00:22:03.599 "bdev_name": "Malloc2", 00:22:03.599 "name": "Malloc2", 00:22:03.599 "nguid": "AF2BB7FDCA87451DAC8AAFEAA5A774A8", 00:22:03.599 "uuid": "af2bb7fd-ca87-451d-ac8a-afeaa5a774a8" 00:22:03.599 } 00:22:03.599 ] 00:22:03.599 } 00:22:03.599 ] 00:22:03.599 11:30:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3896115 00:22:03.599 11:30:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:22:03.599 11:30:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:22:03.599 11:30:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:22:03.599 11:30:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:22:03.860 [2024-06-10 11:30:28.720807] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
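For context, the namespace hot-add exercised above can be reproduced by hand with the same rpc.py calls the test script issued. A minimal sketch, reusing only the commands and arguments visible in this log (the SPDK checkout path is the workspace path shown above), assuming the target is already serving the vfio-user1 socket:

#!/usr/bin/env bash
# Sketch of the namespace hot-add sequence from nvmf_vfio_user.sh (not part of the captured output).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Create a 64 MB malloc bdev with 512-byte blocks (same arguments as the test).
"$RPC" bdev_malloc_create 64 512 --name Malloc3
# Attach it as namespace 2 of cnode1; connected hosts receive a namespace-attribute-changed AER,
# which is what the aer tool above reports as "aer_cb for log page 4, aen_event_type: 0x02".
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# Dump the subsystems to confirm the new "nsid": 2 entry, as in the JSON above.
"$RPC" nvmf_get_subsystems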
00:22:03.860 [2024-06-10 11:30:28.720852] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896219 ] 00:22:03.860 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.860 [2024-06-10 11:30:28.756944] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:22:03.860 [2024-06-10 11:30:28.766319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:22:03.860 [2024-06-10 11:30:28.766347] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5e2bb73000 00:22:03.860 [2024-06-10 11:30:28.767313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:22:03.860 [2024-06-10 11:30:28.768319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:22:03.860 [2024-06-10 11:30:28.769322] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:22:03.860 [2024-06-10 11:30:28.770327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:22:03.860 [2024-06-10 11:30:28.771337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:22:03.860 [2024-06-10 11:30:28.772341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:22:03.860 [2024-06-10 11:30:28.773348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:22:03.860 [2024-06-10 11:30:28.774353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:22:03.860 [2024-06-10 11:30:28.775359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:22:03.860 [2024-06-10 11:30:28.775378] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5e2bb68000 00:22:03.860 [2024-06-10 11:30:28.776627] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:22:03.860 [2024-06-10 11:30:28.796211] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:22:03.860 [2024-06-10 11:30:28.796236] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:22:03.860 [2024-06-10 11:30:28.798319] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:22:03.860 [2024-06-10 11:30:28.798367] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:22:03.860 [2024-06-10 11:30:28.798457] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:22:03.860 [2024-06-10 11:30:28.798480] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:22:03.860 [2024-06-10 11:30:28.798490] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:22:03.860 [2024-06-10 11:30:28.799328] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:22:03.860 [2024-06-10 11:30:28.799342] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:22:03.860 [2024-06-10 11:30:28.799354] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:22:03.860 [2024-06-10 11:30:28.800341] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:22:03.860 [2024-06-10 11:30:28.800355] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:22:03.860 [2024-06-10 11:30:28.800373] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:22:03.860 [2024-06-10 11:30:28.801344] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:22:03.860 [2024-06-10 11:30:28.801357] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:03.860 [2024-06-10 11:30:28.802355] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:22:03.860 [2024-06-10 11:30:28.802369] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:22:03.860 [2024-06-10 11:30:28.802378] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:22:03.860 [2024-06-10 11:30:28.802389] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:03.860 [2024-06-10 11:30:28.802498] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:22:03.860 [2024-06-10 11:30:28.802507] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:03.860 [2024-06-10 11:30:28.802516] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:22:03.860 [2024-06-10 11:30:28.803365] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:22:03.860 [2024-06-10 11:30:28.804373] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:22:03.860 [2024-06-10 11:30:28.805380] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:22:03.860 [2024-06-10 11:30:28.806384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:03.860 [2024-06-10 11:30:28.806445] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:03.860 [2024-06-10 11:30:28.807405] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:22:03.860 [2024-06-10 11:30:28.807426] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:03.860 [2024-06-10 11:30:28.807436] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:22:03.860 [2024-06-10 11:30:28.807463] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:22:03.860 [2024-06-10 11:30:28.807482] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:22:03.860 [2024-06-10 11:30:28.807502] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:22:03.860 [2024-06-10 11:30:28.807511] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:22:03.860 [2024-06-10 11:30:28.807528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:22:03.860 [2024-06-10 11:30:28.813587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:22:03.860 [2024-06-10 11:30:28.813607] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:22:03.860 [2024-06-10 11:30:28.813616] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:22:03.860 [2024-06-10 11:30:28.813624] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:22:03.860 [2024-06-10 11:30:28.813633] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:22:03.860 [2024-06-10 11:30:28.813642] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:22:03.860 [2024-06-10 11:30:28.813650] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:22:03.860 [2024-06-10 11:30:28.813658] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:22:03.860 [2024-06-10 11:30:28.813674] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:22:03.860 [2024-06-10 11:30:28.813690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:22:03.860 [2024-06-10 11:30:28.821584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:22:03.860 [2024-06-10 11:30:28.821603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.860 [2024-06-10 11:30:28.821616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.860 [2024-06-10 11:30:28.821628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.860 [2024-06-10 11:30:28.821641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:22:03.861 [2024-06-10 11:30:28.821649] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.821664] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.821678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.829584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.829596] nvme_ctrlr.c:2945:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:22:03.861 [2024-06-10 11:30:28.829606] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.829617] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.829629] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.829642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.837582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.837644] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.837660] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.837672] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:22:03.861 [2024-06-10 11:30:28.837680] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:22:03.861 [2024-06-10 11:30:28.837691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:22:03.861 
[2024-06-10 11:30:28.845584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.845604] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:22:03.861 [2024-06-10 11:30:28.845617] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.845629] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.845640] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:22:03.861 [2024-06-10 11:30:28.845648] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:22:03.861 [2024-06-10 11:30:28.845658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.853583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.853600] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.853612] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.853624] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:22:03.861 [2024-06-10 11:30:28.853632] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:22:03.861 [2024-06-10 11:30:28.853642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.861583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.861598] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.861609] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.861625] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.861634] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.861643] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.861651] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:22:03.861 [2024-06-10 11:30:28.861660] 
nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:22:03.861 [2024-06-10 11:30:28.861668] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:22:03.861 [2024-06-10 11:30:28.861696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.869585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.869606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.877585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.877605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.885589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.885609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.893582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.893602] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:22:03.861 [2024-06-10 11:30:28.893610] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:22:03.861 [2024-06-10 11:30:28.893617] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:22:03.861 [2024-06-10 11:30:28.893623] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:22:03.861 [2024-06-10 11:30:28.893633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:22:03.861 [2024-06-10 11:30:28.893644] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:22:03.861 [2024-06-10 11:30:28.893652] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:22:03.861 [2024-06-10 11:30:28.893661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.893672] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:22:03.861 [2024-06-10 11:30:28.893680] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:22:03.861 [2024-06-10 11:30:28.893689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.893704] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:22:03.861 [2024-06-10 11:30:28.893712] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:22:03.861 [2024-06-10 11:30:28.893722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:22:03.861 [2024-06-10 11:30:28.901581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.901603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.901618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:22:03.861 [2024-06-10 11:30:28.901633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:22:03.861 ===================================================== 00:22:03.861 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:22:03.861 ===================================================== 00:22:03.861 Controller Capabilities/Features 00:22:03.861 ================================ 00:22:03.861 Vendor ID: 4e58 00:22:03.861 Subsystem Vendor ID: 4e58 00:22:03.861 Serial Number: SPDK2 00:22:03.861 Model Number: SPDK bdev Controller 00:22:03.861 Firmware Version: 24.09 00:22:03.861 Recommended Arb Burst: 6 00:22:03.861 IEEE OUI Identifier: 8d 6b 50 00:22:03.861 Multi-path I/O 00:22:03.861 May have multiple subsystem ports: Yes 00:22:03.861 May have multiple controllers: Yes 00:22:03.861 Associated with SR-IOV VF: No 00:22:03.861 Max Data Transfer Size: 131072 00:22:03.861 Max Number of Namespaces: 32 00:22:03.861 Max Number of I/O Queues: 127 00:22:03.861 NVMe Specification Version (VS): 1.3 00:22:03.861 NVMe Specification Version (Identify): 1.3 00:22:03.861 Maximum Queue Entries: 256 00:22:03.861 Contiguous Queues Required: Yes 00:22:03.861 Arbitration Mechanisms Supported 00:22:03.861 Weighted Round Robin: Not Supported 00:22:03.861 Vendor Specific: Not Supported 00:22:03.861 Reset Timeout: 15000 ms 00:22:03.861 Doorbell Stride: 4 bytes 00:22:03.861 NVM Subsystem Reset: Not Supported 00:22:03.861 Command Sets Supported 00:22:03.861 NVM Command Set: Supported 00:22:03.861 Boot Partition: Not Supported 00:22:03.861 Memory Page Size Minimum: 4096 bytes 00:22:03.861 Memory Page Size Maximum: 4096 bytes 00:22:03.861 Persistent Memory Region: Not Supported 00:22:03.861 Optional Asynchronous Events Supported 00:22:03.861 Namespace Attribute Notices: Supported 00:22:03.861 Firmware Activation Notices: Not Supported 00:22:03.861 ANA Change Notices: Not Supported 00:22:03.861 PLE Aggregate Log Change Notices: Not Supported 00:22:03.861 LBA Status Info Alert Notices: Not Supported 00:22:03.861 EGE Aggregate Log Change Notices: Not Supported 00:22:03.861 Normal NVM Subsystem Shutdown event: Not Supported 00:22:03.862 Zone Descriptor Change Notices: Not Supported 00:22:03.862 Discovery Log Change Notices: Not Supported 00:22:03.862 Controller Attributes 00:22:03.862 128-bit Host Identifier: Supported 00:22:03.862 Non-Operational Permissive Mode: Not Supported 00:22:03.862 NVM Sets: Not Supported 00:22:03.862 Read Recovery Levels: Not Supported 00:22:03.862 Endurance Groups: Not Supported 00:22:03.862 Predictable Latency Mode: Not Supported 00:22:03.862 Traffic Based Keep ALive: Not Supported 00:22:03.862 Namespace Granularity: Not Supported 
00:22:03.862 SQ Associations: Not Supported 00:22:03.862 UUID List: Not Supported 00:22:03.862 Multi-Domain Subsystem: Not Supported 00:22:03.862 Fixed Capacity Management: Not Supported 00:22:03.862 Variable Capacity Management: Not Supported 00:22:03.862 Delete Endurance Group: Not Supported 00:22:03.862 Delete NVM Set: Not Supported 00:22:03.862 Extended LBA Formats Supported: Not Supported 00:22:03.862 Flexible Data Placement Supported: Not Supported 00:22:03.862 00:22:03.862 Controller Memory Buffer Support 00:22:03.862 ================================ 00:22:03.862 Supported: No 00:22:03.862 00:22:03.862 Persistent Memory Region Support 00:22:03.862 ================================ 00:22:03.862 Supported: No 00:22:03.862 00:22:03.862 Admin Command Set Attributes 00:22:03.862 ============================ 00:22:03.862 Security Send/Receive: Not Supported 00:22:03.862 Format NVM: Not Supported 00:22:03.862 Firmware Activate/Download: Not Supported 00:22:03.862 Namespace Management: Not Supported 00:22:03.862 Device Self-Test: Not Supported 00:22:03.862 Directives: Not Supported 00:22:03.862 NVMe-MI: Not Supported 00:22:03.862 Virtualization Management: Not Supported 00:22:03.862 Doorbell Buffer Config: Not Supported 00:22:03.862 Get LBA Status Capability: Not Supported 00:22:03.862 Command & Feature Lockdown Capability: Not Supported 00:22:03.862 Abort Command Limit: 4 00:22:03.862 Async Event Request Limit: 4 00:22:03.862 Number of Firmware Slots: N/A 00:22:03.862 Firmware Slot 1 Read-Only: N/A 00:22:03.862 Firmware Activation Without Reset: N/A 00:22:03.862 Multiple Update Detection Support: N/A 00:22:03.862 Firmware Update Granularity: No Information Provided 00:22:03.862 Per-Namespace SMART Log: No 00:22:03.862 Asymmetric Namespace Access Log Page: Not Supported 00:22:03.862 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:22:03.862 Command Effects Log Page: Supported 00:22:03.862 Get Log Page Extended Data: Supported 00:22:03.862 Telemetry Log Pages: Not Supported 00:22:03.862 Persistent Event Log Pages: Not Supported 00:22:03.862 Supported Log Pages Log Page: May Support 00:22:03.862 Commands Supported & Effects Log Page: Not Supported 00:22:03.862 Feature Identifiers & Effects Log Page:May Support 00:22:03.862 NVMe-MI Commands & Effects Log Page: May Support 00:22:03.862 Data Area 4 for Telemetry Log: Not Supported 00:22:03.862 Error Log Page Entries Supported: 128 00:22:03.862 Keep Alive: Supported 00:22:03.862 Keep Alive Granularity: 10000 ms 00:22:03.862 00:22:03.862 NVM Command Set Attributes 00:22:03.862 ========================== 00:22:03.862 Submission Queue Entry Size 00:22:03.862 Max: 64 00:22:03.862 Min: 64 00:22:03.862 Completion Queue Entry Size 00:22:03.862 Max: 16 00:22:03.862 Min: 16 00:22:03.862 Number of Namespaces: 32 00:22:03.862 Compare Command: Supported 00:22:03.862 Write Uncorrectable Command: Not Supported 00:22:03.862 Dataset Management Command: Supported 00:22:03.862 Write Zeroes Command: Supported 00:22:03.862 Set Features Save Field: Not Supported 00:22:03.862 Reservations: Not Supported 00:22:03.862 Timestamp: Not Supported 00:22:03.862 Copy: Supported 00:22:03.862 Volatile Write Cache: Present 00:22:03.862 Atomic Write Unit (Normal): 1 00:22:03.862 Atomic Write Unit (PFail): 1 00:22:03.862 Atomic Compare & Write Unit: 1 00:22:03.862 Fused Compare & Write: Supported 00:22:03.862 Scatter-Gather List 00:22:03.862 SGL Command Set: Supported (Dword aligned) 00:22:03.862 SGL Keyed: Not Supported 00:22:03.862 SGL Bit Bucket Descriptor: Not Supported 00:22:03.862 
SGL Metadata Pointer: Not Supported 00:22:03.862 Oversized SGL: Not Supported 00:22:03.862 SGL Metadata Address: Not Supported 00:22:03.862 SGL Offset: Not Supported 00:22:03.862 Transport SGL Data Block: Not Supported 00:22:03.862 Replay Protected Memory Block: Not Supported 00:22:03.862 00:22:03.862 Firmware Slot Information 00:22:03.862 ========================= 00:22:03.862 Active slot: 1 00:22:03.862 Slot 1 Firmware Revision: 24.09 00:22:03.862 00:22:03.862 00:22:03.862 Commands Supported and Effects 00:22:03.862 ============================== 00:22:03.862 Admin Commands 00:22:03.862 -------------- 00:22:03.862 Get Log Page (02h): Supported 00:22:03.862 Identify (06h): Supported 00:22:03.862 Abort (08h): Supported 00:22:03.862 Set Features (09h): Supported 00:22:03.862 Get Features (0Ah): Supported 00:22:03.862 Asynchronous Event Request (0Ch): Supported 00:22:03.862 Keep Alive (18h): Supported 00:22:03.862 I/O Commands 00:22:03.862 ------------ 00:22:03.862 Flush (00h): Supported LBA-Change 00:22:03.862 Write (01h): Supported LBA-Change 00:22:03.862 Read (02h): Supported 00:22:03.862 Compare (05h): Supported 00:22:03.862 Write Zeroes (08h): Supported LBA-Change 00:22:03.862 Dataset Management (09h): Supported LBA-Change 00:22:03.862 Copy (19h): Supported LBA-Change 00:22:03.862 Unknown (79h): Supported LBA-Change 00:22:03.862 Unknown (7Ah): Supported 00:22:03.862 00:22:03.862 Error Log 00:22:03.862 ========= 00:22:03.862 00:22:03.862 Arbitration 00:22:03.862 =========== 00:22:03.862 Arbitration Burst: 1 00:22:03.862 00:22:03.862 Power Management 00:22:03.862 ================ 00:22:03.862 Number of Power States: 1 00:22:03.862 Current Power State: Power State #0 00:22:03.862 Power State #0: 00:22:03.862 Max Power: 0.00 W 00:22:03.862 Non-Operational State: Operational 00:22:03.862 Entry Latency: Not Reported 00:22:03.862 Exit Latency: Not Reported 00:22:03.862 Relative Read Throughput: 0 00:22:03.862 Relative Read Latency: 0 00:22:03.862 Relative Write Throughput: 0 00:22:03.862 Relative Write Latency: 0 00:22:03.862 Idle Power: Not Reported 00:22:03.862 Active Power: Not Reported 00:22:03.862 Non-Operational Permissive Mode: Not Supported 00:22:03.862 00:22:03.862 Health Information 00:22:03.862 ================== 00:22:03.862 Critical Warnings: 00:22:03.862 Available Spare Space: OK 00:22:03.862 Temperature: OK 00:22:03.862 Device Reliability: OK 00:22:03.862 Read Only: No 00:22:03.862 Volatile Memory Backup: OK 00:22:03.862 Current Temperature: 0 Kelvin (-2[2024-06-10 11:30:28.901756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:22:03.862 [2024-06-10 11:30:28.909583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:22:03.862 [2024-06-10 11:30:28.909633] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:22:03.862 [2024-06-10 11:30:28.909648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.862 [2024-06-10 11:30:28.909659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.862 [2024-06-10 11:30:28.909670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.862 [2024-06-10 11:30:28.909681] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.862 [2024-06-10 11:30:28.909758] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:22:03.862 [2024-06-10 11:30:28.909774] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:22:03.862 [2024-06-10 11:30:28.910759] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:03.862 [2024-06-10 11:30:28.910830] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:22:03.862 [2024-06-10 11:30:28.910845] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:22:03.862 [2024-06-10 11:30:28.911771] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:22:03.862 [2024-06-10 11:30:28.911790] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:22:03.862 [2024-06-10 11:30:28.911850] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:22:03.862 [2024-06-10 11:30:28.913147] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:22:03.862 73 Celsius) 00:22:03.862 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:03.862 Available Spare: 0% 00:22:03.862 Available Spare Threshold: 0% 00:22:03.862 Life Percentage Used: 0% 00:22:03.862 Data Units Read: 0 00:22:03.862 Data Units Written: 0 00:22:03.862 Host Read Commands: 0 00:22:03.862 Host Write Commands: 0 00:22:03.863 Controller Busy Time: 0 minutes 00:22:03.863 Power Cycles: 0 00:22:03.863 Power On Hours: 0 hours 00:22:03.863 Unsafe Shutdowns: 0 00:22:03.863 Unrecoverable Media Errors: 0 00:22:03.863 Lifetime Error Log Entries: 0 00:22:03.863 Warning Temperature Time: 0 minutes 00:22:03.863 Critical Temperature Time: 0 minutes 00:22:03.863 00:22:03.863 Number of Queues 00:22:03.863 ================ 00:22:03.863 Number of I/O Submission Queues: 127 00:22:03.863 Number of I/O Completion Queues: 127 00:22:03.863 00:22:03.863 Active Namespaces 00:22:03.863 ================= 00:22:03.863 Namespace ID:1 00:22:03.863 Error Recovery Timeout: Unlimited 00:22:03.863 Command Set Identifier: NVM (00h) 00:22:03.863 Deallocate: Supported 00:22:03.863 Deallocated/Unwritten Error: Not Supported 00:22:03.863 Deallocated Read Value: Unknown 00:22:03.863 Deallocate in Write Zeroes: Not Supported 00:22:03.863 Deallocated Guard Field: 0xFFFF 00:22:03.863 Flush: Supported 00:22:03.863 Reservation: Supported 00:22:03.863 Namespace Sharing Capabilities: Multiple Controllers 00:22:03.863 Size (in LBAs): 131072 (0GiB) 00:22:03.863 Capacity (in LBAs): 131072 (0GiB) 00:22:03.863 Utilization (in LBAs): 131072 (0GiB) 00:22:03.863 NGUID: AF2BB7FDCA87451DAC8AAFEAA5A774A8 00:22:03.863 UUID: af2bb7fd-ca87-451d-ac8a-afeaa5a774a8 00:22:03.863 Thin Provisioning: Not Supported 00:22:03.863 Per-NS Atomic Units: Yes 00:22:03.863 Atomic Boundary Size (Normal): 0 00:22:03.863 Atomic Boundary Size (PFail): 0 00:22:03.863 Atomic Boundary Offset: 0 00:22:03.863 Maximum Single Source Range Length: 65535 
00:22:03.863 Maximum Copy Length: 65535 00:22:03.863 Maximum Source Range Count: 1 00:22:03.863 NGUID/EUI64 Never Reused: No 00:22:03.863 Namespace Write Protected: No 00:22:03.863 Number of LBA Formats: 1 00:22:03.863 Current LBA Format: LBA Format #00 00:22:03.863 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:03.863 00:22:04.121 11:30:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:22:04.122 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.122 [2024-06-10 11:30:29.155493] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:09.392 Initializing NVMe Controllers 00:22:09.392 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:22:09.392 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:22:09.392 Initialization complete. Launching workers. 00:22:09.392 ======================================================== 00:22:09.392 Latency(us) 00:22:09.392 Device Information : IOPS MiB/s Average min max 00:22:09.392 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40067.20 156.51 3193.69 993.30 6617.50 00:22:09.392 ======================================================== 00:22:09.392 Total : 40067.20 156.51 3193.69 993.30 6617.50 00:22:09.392 00:22:09.392 [2024-06-10 11:30:34.258877] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:09.392 11:30:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:22:09.392 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.651 [2024-06-10 11:30:34.515722] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:14.927 Initializing NVMe Controllers 00:22:14.927 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:22:14.927 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:22:14.927 Initialization complete. Launching workers. 
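The back-to-back spdk_nvme_perf runs in this stage differ only in the -w workload; a minimal sketch of driving both from one loop, reusing exactly the binary, transport string and flags recorded in this log:

#!/usr/bin/env bash
# Sketch only: same invocation as the read/write perf runs above and below.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
for wl in read write; do
    # 128-deep queue, 4096-byte I/O, 5 s per workload, worker pinned by core mask 0x2 (lcore 1).
    "$PERF" -r "$TR" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
done

As a sanity check on the summary tables, MiB/s is just IOPS x 4096 bytes: for the read run above, 40067.20 IOPS x 4096 / 2^20 = 156.51 MiB/s, matching the reported column.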
00:22:14.927 ======================================================== 00:22:14.927 Latency(us) 00:22:14.927 Device Information : IOPS MiB/s Average min max 00:22:14.927 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30801.63 120.32 4154.92 1269.08 10473.91 00:22:14.927 ======================================================== 00:22:14.927 Total : 30801.63 120.32 4154.92 1269.08 10473.91 00:22:14.927 00:22:14.927 [2024-06-10 11:30:39.536328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:14.927 11:30:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:22:14.927 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.927 [2024-06-10 11:30:39.840546] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:20.201 [2024-06-10 11:30:44.974694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:20.201 Initializing NVMe Controllers 00:22:20.201 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:22:20.201 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:22:20.201 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:22:20.201 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:22:20.201 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:22:20.201 Initialization complete. Launching workers. 00:22:20.201 Starting thread on core 2 00:22:20.201 Starting thread on core 3 00:22:20.201 Starting thread on core 1 00:22:20.201 11:30:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:22:20.201 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.461 [2024-06-10 11:30:45.356114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:23.753 [2024-06-10 11:30:48.415811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:23.753 Initializing NVMe Controllers 00:22:23.753 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:22:23.753 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:22:23.753 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:22:23.753 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:22:23.753 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:22:23.753 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:22:23.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:22:23.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:22:23.754 Initialization complete. Launching workers. 
00:22:23.754 Starting thread on core 1 with urgent priority queue 00:22:23.754 Starting thread on core 2 with urgent priority queue 00:22:23.754 Starting thread on core 3 with urgent priority queue 00:22:23.754 Starting thread on core 0 with urgent priority queue 00:22:23.754 SPDK bdev Controller (SPDK2 ) core 0: 8702.33 IO/s 11.49 secs/100000 ios 00:22:23.754 SPDK bdev Controller (SPDK2 ) core 1: 10022.33 IO/s 9.98 secs/100000 ios 00:22:23.754 SPDK bdev Controller (SPDK2 ) core 2: 9072.00 IO/s 11.02 secs/100000 ios 00:22:23.754 SPDK bdev Controller (SPDK2 ) core 3: 10123.33 IO/s 9.88 secs/100000 ios 00:22:23.754 ======================================================== 00:22:23.754 00:22:23.754 11:30:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:22:23.754 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.754 [2024-06-10 11:30:48.783813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:23.754 Initializing NVMe Controllers 00:22:23.754 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:22:23.754 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:22:23.754 Namespace ID: 1 size: 0GB 00:22:23.754 Initialization complete. 00:22:23.754 INFO: using host memory buffer for IO 00:22:23.754 Hello world! 00:22:23.754 [2024-06-10 11:30:48.792874] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:23.754 11:30:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:22:24.013 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.272 [2024-06-10 11:30:49.160236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:25.210 Initializing NVMe Controllers 00:22:25.210 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:22:25.210 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:22:25.210 Initialization complete. Launching workers. 
00:22:25.210 submit (in ns) avg, min, max = 7919.0, 4047.2, 4001729.6 00:22:25.210 complete (in ns) avg, min, max = 26383.0, 2398.4, 4002953.6 00:22:25.210 00:22:25.210 Submit histogram 00:22:25.210 ================ 00:22:25.210 Range in us Cumulative Count 00:22:25.210 4.045 - 4.070: 0.3037% ( 40) 00:22:25.210 4.070 - 4.096: 2.9530% ( 349) 00:22:25.210 4.096 - 4.122: 8.2214% ( 694) 00:22:25.210 4.122 - 4.147: 17.0500% ( 1163) 00:22:25.210 4.147 - 4.173: 26.0761% ( 1189) 00:22:25.210 4.173 - 4.198: 32.4072% ( 834) 00:22:25.210 4.198 - 4.224: 38.5182% ( 805) 00:22:25.210 4.224 - 4.250: 45.2592% ( 888) 00:22:25.210 4.250 - 4.275: 57.2990% ( 1586) 00:22:25.210 4.275 - 4.301: 71.1000% ( 1818) 00:22:25.210 4.301 - 4.326: 79.2834% ( 1078) 00:22:25.210 4.326 - 4.352: 84.0279% ( 625) 00:22:25.210 4.352 - 4.378: 86.2825% ( 297) 00:22:25.210 4.378 - 4.403: 87.4061% ( 148) 00:22:25.210 4.403 - 4.429: 88.3398% ( 123) 00:22:25.210 4.429 - 4.454: 89.8049% ( 193) 00:22:25.210 4.454 - 4.480: 90.9284% ( 148) 00:22:25.210 4.480 - 4.506: 92.1278% ( 158) 00:22:25.210 4.506 - 4.531: 93.7827% ( 218) 00:22:25.210 4.531 - 4.557: 95.7717% ( 262) 00:22:25.210 4.557 - 4.582: 96.8952% ( 148) 00:22:25.210 4.582 - 4.608: 97.9731% ( 142) 00:22:25.210 4.608 - 4.634: 98.6488% ( 89) 00:22:25.210 4.634 - 4.659: 99.1118% ( 61) 00:22:25.210 4.659 - 4.685: 99.3699% ( 34) 00:22:25.210 4.685 - 4.710: 99.4762% ( 14) 00:22:25.210 4.710 - 4.736: 99.5217% ( 6) 00:22:25.210 4.736 - 4.762: 99.5521% ( 4) 00:22:25.210 4.762 - 4.787: 99.5673% ( 2) 00:22:25.210 6.861 - 6.912: 99.5749% ( 1) 00:22:25.210 6.963 - 7.014: 99.5825% ( 1) 00:22:25.210 7.014 - 7.066: 99.5901% ( 1) 00:22:25.210 7.066 - 7.117: 99.6053% ( 2) 00:22:25.210 7.117 - 7.168: 99.6280% ( 3) 00:22:25.210 7.168 - 7.219: 99.6508% ( 3) 00:22:25.210 7.219 - 7.270: 99.6584% ( 1) 00:22:25.210 7.322 - 7.373: 99.6660% ( 1) 00:22:25.210 7.373 - 7.424: 99.6736% ( 1) 00:22:25.210 7.424 - 7.475: 99.6963% ( 3) 00:22:25.210 7.475 - 7.526: 99.7039% ( 1) 00:22:25.210 7.578 - 7.629: 99.7115% ( 1) 00:22:25.210 7.629 - 7.680: 99.7191% ( 1) 00:22:25.210 7.680 - 7.731: 99.7267% ( 1) 00:22:25.210 7.731 - 7.782: 99.7343% ( 1) 00:22:25.210 7.782 - 7.834: 99.7495% ( 2) 00:22:25.210 7.885 - 7.936: 99.7647% ( 2) 00:22:25.210 7.936 - 7.987: 99.7723% ( 1) 00:22:25.210 8.294 - 8.346: 99.7799% ( 1) 00:22:25.210 8.397 - 8.448: 99.8026% ( 3) 00:22:25.210 8.448 - 8.499: 99.8102% ( 1) 00:22:25.210 8.550 - 8.602: 99.8178% ( 1) 00:22:25.210 8.653 - 8.704: 99.8254% ( 1) 00:22:25.210 8.704 - 8.755: 99.8330% ( 1) 00:22:25.210 8.755 - 8.806: 99.8406% ( 1) 00:22:25.210 8.909 - 8.960: 99.8482% ( 1) 00:22:25.210 9.216 - 9.267: 99.8634% ( 2) 00:22:25.210 9.421 - 9.472: 99.8785% ( 2) 00:22:25.210 9.574 - 9.626: 99.8861% ( 1) 00:22:25.210 9.728 - 9.779: 99.8937% ( 1) 00:22:25.210 10.291 - 10.342: 99.9013% ( 1) 00:22:25.210 11.366 - 11.418: 99.9089% ( 1) 00:22:25.210 3984.589 - 4010.803: 100.0000% ( 12) 00:22:25.210 00:22:25.210 Complete histogram 00:22:25.210 ================== 00:22:25.210 Range in us Cumulative Count 00:22:25.210 2.394 - 2.406: 0.1063% ( 14) 00:22:25.210 2.406 - 2.419: 4.5624% ( 587) 00:22:25.210 2.419 - 2.432: 29.6819% ( 3309) 00:22:25.210 2.432 - 2.445: 60.1230% ( 4010) 00:22:25.210 2.445 - 2.458: 69.8854% ( 1286) 00:22:25.210 2.458 - 2.470: 76.7858% ( 909) 00:22:25.210 2.470 - 2.483: 85.5082% ( 1149) 00:22:25.210 2.483 - 2.496: 89.8429% ( 571) 00:22:25.210 2.496 - 2.509: 92.2948% ( 323) 00:22:25.210 2.509 - 2.522: 94.5419% ( 296) 00:22:25.210 2.522 - 2.534: 95.9007% ( 179) 00:22:25.210 2.534 - 2.547: 
97.0242% ( 148) 00:22:25.210 2.547 - 2.560: 97.9807% ( 126) 00:22:25.210 2.560 - 2.573: 98.5652% ( 77) 00:22:25.210 2.573 - 2.586: 98.8006% ( 31) 00:22:25.210 2.586 - [2024-06-10 11:30:50.254515] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:25.210 2.598: 98.8917% ( 12) 00:22:25.210 2.598 - 2.611: 98.9980% ( 14) 00:22:25.210 2.611 - 2.624: 99.0511% ( 7) 00:22:25.210 2.624 - 2.637: 99.0890% ( 5) 00:22:25.210 2.637 - 2.650: 99.1118% ( 3) 00:22:25.210 2.650 - 2.662: 99.1346% ( 3) 00:22:25.210 2.662 - 2.675: 99.1650% ( 4) 00:22:25.210 2.688 - 2.701: 99.1725% ( 1) 00:22:25.210 2.701 - 2.714: 99.1801% ( 1) 00:22:25.210 2.726 - 2.739: 99.1877% ( 1) 00:22:25.210 2.778 - 2.790: 99.1953% ( 1) 00:22:25.210 3.059 - 3.072: 99.2029% ( 1) 00:22:25.210 4.787 - 4.813: 99.2105% ( 1) 00:22:25.210 5.069 - 5.094: 99.2181% ( 1) 00:22:25.210 5.094 - 5.120: 99.2257% ( 1) 00:22:25.210 5.146 - 5.171: 99.2333% ( 1) 00:22:25.210 5.453 - 5.478: 99.2409% ( 1) 00:22:25.210 5.478 - 5.504: 99.2485% ( 1) 00:22:25.210 5.530 - 5.555: 99.2561% ( 1) 00:22:25.210 5.555 - 5.581: 99.2636% ( 1) 00:22:25.210 5.581 - 5.606: 99.2712% ( 1) 00:22:25.210 5.606 - 5.632: 99.2788% ( 1) 00:22:25.210 5.658 - 5.683: 99.2864% ( 1) 00:22:25.210 5.709 - 5.734: 99.2940% ( 1) 00:22:25.210 6.093 - 6.118: 99.3016% ( 1) 00:22:25.210 6.195 - 6.221: 99.3168% ( 2) 00:22:25.210 6.349 - 6.374: 99.3244% ( 1) 00:22:25.210 6.374 - 6.400: 99.3320% ( 1) 00:22:25.210 6.451 - 6.477: 99.3547% ( 3) 00:22:25.210 6.477 - 6.502: 99.3623% ( 1) 00:22:25.210 7.526 - 7.578: 99.3775% ( 2) 00:22:25.210 8.141 - 8.192: 99.3851% ( 1) 00:22:25.210 8.499 - 8.550: 99.3927% ( 1) 00:22:25.210 15.770 - 15.872: 99.4003% ( 1) 00:22:25.210 3486.515 - 3512.730: 99.4079% ( 1) 00:22:25.210 3984.589 - 4010.803: 100.0000% ( 78) 00:22:25.210 00:22:25.210 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:22:25.210 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:22:25.210 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:22:25.210 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:22:25.210 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:22:25.470 [ 00:22:25.470 { 00:22:25.470 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:25.470 "subtype": "Discovery", 00:22:25.470 "listen_addresses": [], 00:22:25.470 "allow_any_host": true, 00:22:25.470 "hosts": [] 00:22:25.470 }, 00:22:25.470 { 00:22:25.470 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:22:25.470 "subtype": "NVMe", 00:22:25.470 "listen_addresses": [ 00:22:25.470 { 00:22:25.470 "trtype": "VFIOUSER", 00:22:25.470 "adrfam": "IPv4", 00:22:25.470 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:22:25.470 "trsvcid": "0" 00:22:25.470 } 00:22:25.470 ], 00:22:25.470 "allow_any_host": true, 00:22:25.470 "hosts": [], 00:22:25.470 "serial_number": "SPDK1", 00:22:25.470 "model_number": "SPDK bdev Controller", 00:22:25.470 "max_namespaces": 32, 00:22:25.470 "min_cntlid": 1, 00:22:25.470 "max_cntlid": 65519, 00:22:25.470 "namespaces": [ 00:22:25.470 { 00:22:25.470 "nsid": 1, 00:22:25.470 "bdev_name": "Malloc1", 00:22:25.470 "name": "Malloc1", 00:22:25.470 "nguid": 
"1C5C917E11F24EE3AA183E310CFDBCB8", 00:22:25.470 "uuid": "1c5c917e-11f2-4ee3-aa18-3e310cfdbcb8" 00:22:25.470 }, 00:22:25.470 { 00:22:25.470 "nsid": 2, 00:22:25.470 "bdev_name": "Malloc3", 00:22:25.470 "name": "Malloc3", 00:22:25.470 "nguid": "1C3335DA7D9F49BFAFE8AB9C3E1AE03C", 00:22:25.470 "uuid": "1c3335da-7d9f-49bf-afe8-ab9c3e1ae03c" 00:22:25.470 } 00:22:25.470 ] 00:22:25.470 }, 00:22:25.470 { 00:22:25.470 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:22:25.470 "subtype": "NVMe", 00:22:25.470 "listen_addresses": [ 00:22:25.470 { 00:22:25.470 "trtype": "VFIOUSER", 00:22:25.470 "adrfam": "IPv4", 00:22:25.470 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:22:25.470 "trsvcid": "0" 00:22:25.470 } 00:22:25.470 ], 00:22:25.470 "allow_any_host": true, 00:22:25.470 "hosts": [], 00:22:25.470 "serial_number": "SPDK2", 00:22:25.470 "model_number": "SPDK bdev Controller", 00:22:25.470 "max_namespaces": 32, 00:22:25.470 "min_cntlid": 1, 00:22:25.470 "max_cntlid": 65519, 00:22:25.470 "namespaces": [ 00:22:25.470 { 00:22:25.470 "nsid": 1, 00:22:25.470 "bdev_name": "Malloc2", 00:22:25.470 "name": "Malloc2", 00:22:25.470 "nguid": "AF2BB7FDCA87451DAC8AAFEAA5A774A8", 00:22:25.470 "uuid": "af2bb7fd-ca87-451d-ac8a-afeaa5a774a8" 00:22:25.470 } 00:22:25.470 ] 00:22:25.470 } 00:22:25.470 ] 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3899946 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:22:25.470 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:22:25.729 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.729 [2024-06-10 11:30:50.779028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:22:25.729 Malloc4 00:22:25.730 11:30:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:22:25.989 [2024-06-10 11:30:51.032792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:22:25.989 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:22:25.989 Asynchronous Event Request test 00:22:25.989 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:22:25.989 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:22:25.989 Registering asynchronous event callbacks... 00:22:25.989 Starting namespace attribute notice tests for all controllers... 00:22:25.989 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:25.989 aer_cb - Changed Namespace 00:22:25.989 Cleaning up... 00:22:26.248 [ 00:22:26.248 { 00:22:26.248 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:26.248 "subtype": "Discovery", 00:22:26.248 "listen_addresses": [], 00:22:26.248 "allow_any_host": true, 00:22:26.248 "hosts": [] 00:22:26.248 }, 00:22:26.248 { 00:22:26.248 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:22:26.248 "subtype": "NVMe", 00:22:26.248 "listen_addresses": [ 00:22:26.248 { 00:22:26.248 "trtype": "VFIOUSER", 00:22:26.248 "adrfam": "IPv4", 00:22:26.248 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:22:26.248 "trsvcid": "0" 00:22:26.248 } 00:22:26.248 ], 00:22:26.248 "allow_any_host": true, 00:22:26.248 "hosts": [], 00:22:26.248 "serial_number": "SPDK1", 00:22:26.248 "model_number": "SPDK bdev Controller", 00:22:26.248 "max_namespaces": 32, 00:22:26.248 "min_cntlid": 1, 00:22:26.248 "max_cntlid": 65519, 00:22:26.248 "namespaces": [ 00:22:26.248 { 00:22:26.248 "nsid": 1, 00:22:26.248 "bdev_name": "Malloc1", 00:22:26.248 "name": "Malloc1", 00:22:26.248 "nguid": "1C5C917E11F24EE3AA183E310CFDBCB8", 00:22:26.248 "uuid": "1c5c917e-11f2-4ee3-aa18-3e310cfdbcb8" 00:22:26.248 }, 00:22:26.248 { 00:22:26.248 "nsid": 2, 00:22:26.248 "bdev_name": "Malloc3", 00:22:26.248 "name": "Malloc3", 00:22:26.248 "nguid": "1C3335DA7D9F49BFAFE8AB9C3E1AE03C", 00:22:26.248 "uuid": "1c3335da-7d9f-49bf-afe8-ab9c3e1ae03c" 00:22:26.248 } 00:22:26.248 ] 00:22:26.248 }, 00:22:26.248 { 00:22:26.248 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:22:26.248 "subtype": "NVMe", 00:22:26.248 "listen_addresses": [ 00:22:26.248 { 00:22:26.248 "trtype": "VFIOUSER", 00:22:26.248 "adrfam": "IPv4", 00:22:26.248 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:22:26.248 "trsvcid": "0" 00:22:26.248 } 00:22:26.248 ], 00:22:26.248 "allow_any_host": true, 00:22:26.248 "hosts": [], 00:22:26.248 "serial_number": "SPDK2", 00:22:26.248 "model_number": "SPDK bdev Controller", 00:22:26.248 
"max_namespaces": 32, 00:22:26.248 "min_cntlid": 1, 00:22:26.248 "max_cntlid": 65519, 00:22:26.248 "namespaces": [ 00:22:26.248 { 00:22:26.248 "nsid": 1, 00:22:26.248 "bdev_name": "Malloc2", 00:22:26.248 "name": "Malloc2", 00:22:26.248 "nguid": "AF2BB7FDCA87451DAC8AAFEAA5A774A8", 00:22:26.248 "uuid": "af2bb7fd-ca87-451d-ac8a-afeaa5a774a8" 00:22:26.248 }, 00:22:26.248 { 00:22:26.248 "nsid": 2, 00:22:26.248 "bdev_name": "Malloc4", 00:22:26.248 "name": "Malloc4", 00:22:26.248 "nguid": "79DFC319F04A454CA24521BBCB192B90", 00:22:26.248 "uuid": "79dfc319-f04a-454c-a245-21bbcb192b90" 00:22:26.248 } 00:22:26.248 ] 00:22:26.248 } 00:22:26.248 ] 00:22:26.248 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3899946 00:22:26.248 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:22:26.248 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3891246 00:22:26.248 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 3891246 ']' 00:22:26.248 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 3891246 00:22:26.248 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:22:26.248 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:26.248 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3891246 00:22:26.249 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:26.249 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:26.249 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3891246' 00:22:26.249 killing process with pid 3891246 00:22:26.249 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 3891246 00:22:26.249 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 3891246 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3900120 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3900120' 00:22:26.818 Process pid: 3900120 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3900120 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 3900120 ']' 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.818 11:30:51 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:26.818 11:30:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:22:26.818 [2024-06-10 11:30:51.676098] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:26.818 [2024-06-10 11:30:51.677302] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:22:26.818 [2024-06-10 11:30:51.677348] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.818 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.818 [2024-06-10 11:30:51.799038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.818 [2024-06-10 11:30:51.880986] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.818 [2024-06-10 11:30:51.881036] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.818 [2024-06-10 11:30:51.881056] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.818 [2024-06-10 11:30:51.881071] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.818 [2024-06-10 11:30:51.881084] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.818 [2024-06-10 11:30:51.881153] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.818 [2024-06-10 11:30:51.881247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.818 [2024-06-10 11:30:51.881358] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.818 [2024-06-10 11:30:51.881361] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.078 [2024-06-10 11:30:51.962624] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:27.078 [2024-06-10 11:30:51.962729] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:22:27.078 [2024-06-10 11:30:51.963215] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:22:27.078 [2024-06-10 11:30:51.963370] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:27.078 [2024-06-10 11:30:51.963634] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
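The trace that follows repeats the vfio-user bring-up, this time against the new interrupt-mode target. Condensed into a standalone sketch for one device — binary path, flags, sizes and NQNs are copied from the commands traced below, the default RPC socket /var/tmp/spdk.sock is assumed reachable, and the full rpc.py path is shortened to scripts/rpc.py — the sequence is roughly:

# start the target with interrupt mode enabled (flags as used by this job;
# backgrounded here only for illustration, the harness manages the process itself)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &

# create the VFIOUSER transport in interrupt mode (-M -I), then one malloc-backed namespace
scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same steps are then repeated for /var/run/vfio-user/domain/vfio-user2/2 with Malloc2 and nqn.2019-07.io.spdk:cnode2, as shown in the trace below.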
00:22:27.645 11:30:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:27.645 11:30:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:22:27.645 11:30:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:22:28.582 11:30:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:22:28.841 11:30:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:22:28.842 11:30:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:22:28.842 11:30:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:22:28.842 11:30:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:22:28.842 11:30:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:29.101 Malloc1 00:22:29.101 11:30:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:22:29.360 11:30:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:22:29.619 11:30:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:22:29.879 11:30:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:22:29.879 11:30:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:22:29.879 11:30:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:22:30.138 Malloc2 00:22:30.138 11:30:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:22:30.397 11:30:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:22:30.656 11:30:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:22:30.915 11:30:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:22:30.915 11:30:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3900120 00:22:30.915 11:30:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 3900120 ']' 00:22:30.915 11:30:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 3900120 00:22:30.915 11:30:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:22:30.915 11:30:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:30.916 11:30:55 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3900120 00:22:30.916 11:30:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:30.916 11:30:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:30.916 11:30:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3900120' 00:22:30.916 killing process with pid 3900120 00:22:30.916 11:30:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 3900120 00:22:30.916 11:30:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 3900120 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:22:31.175 00:22:31.175 real 0m54.019s 00:22:31.175 user 3m32.183s 00:22:31.175 sys 0m5.864s 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:22:31.175 ************************************ 00:22:31.175 END TEST nvmf_vfio_user 00:22:31.175 ************************************ 00:22:31.175 11:30:56 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:22:31.175 11:30:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:31.175 11:30:56 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:31.175 11:30:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.175 ************************************ 00:22:31.175 START TEST nvmf_vfio_user_nvme_compliance 00:22:31.175 ************************************ 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:22:31.175 * Looking for test storage... 
00:22:31.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.175 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.434 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3900999 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3900999' 00:22:31.435 Process pid: 3900999 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3900999 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 3900999 ']' 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:31.435 11:30:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:31.435 [2024-06-10 11:30:56.356407] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:22:31.435 [2024-06-10 11:30:56.356473] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.435 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.435 [2024-06-10 11:30:56.481433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:31.694 [2024-06-10 11:30:56.567650] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.694 [2024-06-10 11:30:56.567698] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.694 [2024-06-10 11:30:56.567718] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.694 [2024-06-10 11:30:56.567733] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.694 [2024-06-10 11:30:56.567747] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
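Before the reactors report in below, the shape of this compliance run is worth noting: the script exposes a single malloc-backed subsystem over a vfio-user socket and points the nvme_compliance binary at it. A condensed sketch using the paths and arguments from the commands traced in this run (the harness's rpc_cmd wrapper is shortened to scripts/rpc.py, and the target is assumed already running with the -i 0 -e 0xFFFF -m 0x7 command shown above):

# transport plus one malloc-backed namespace on the vfio-user socket
scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

# run the compliance suite against that socket
test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'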
00:22:31.694 [2024-06-10 11:30:56.567817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.694 [2024-06-10 11:30:56.567840] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.694 [2024-06-10 11:30:56.567850] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.261 11:30:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:32.261 11:30:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:22:32.261 11:30:57 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.197 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:33.479 malloc0 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:33.479 11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.479 
11:30:58 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:22:33.479 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.479 00:22:33.479 00:22:33.479 CUnit - A unit testing framework for C - Version 2.1-3 00:22:33.479 http://cunit.sourceforge.net/ 00:22:33.479 00:22:33.479 00:22:33.479 Suite: nvme_compliance 00:22:33.758 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-10 11:30:58.592947] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:33.758 [2024-06-10 11:30:58.594401] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:22:33.758 [2024-06-10 11:30:58.594422] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:22:33.758 [2024-06-10 11:30:58.594433] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:22:33.758 [2024-06-10 11:30:58.595974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:33.758 passed 00:22:33.758 Test: admin_identify_ctrlr_verify_fused ...[2024-06-10 11:30:58.690632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:33.758 [2024-06-10 11:30:58.693649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:33.758 passed 00:22:33.758 Test: admin_identify_ns ...[2024-06-10 11:30:58.789220] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:33.758 [2024-06-10 11:30:58.852592] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:22:33.758 [2024-06-10 11:30:58.860590] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:22:34.018 [2024-06-10 11:30:58.881727] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:34.018 passed 00:22:34.018 Test: admin_get_features_mandatory_features ...[2024-06-10 11:30:58.970217] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:34.018 [2024-06-10 11:30:58.975254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:34.018 passed 00:22:34.018 Test: admin_get_features_optional_features ...[2024-06-10 11:30:59.068940] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:34.018 [2024-06-10 11:30:59.071961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:34.018 passed 00:22:34.277 Test: admin_set_features_number_of_queues ...[2024-06-10 11:30:59.163439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:34.277 [2024-06-10 11:30:59.266700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:34.277 passed 00:22:34.277 Test: admin_get_log_page_mandatory_logs ...[2024-06-10 11:30:59.357166] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:34.277 [2024-06-10 11:30:59.360189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:34.536 passed 00:22:34.536 Test: admin_get_log_page_with_lpo ...[2024-06-10 11:30:59.452222] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:34.536 [2024-06-10 11:30:59.519595] 
ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:22:34.536 [2024-06-10 11:30:59.532666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:34.536 passed 00:22:34.536 Test: fabric_property_get ...[2024-06-10 11:30:59.625572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:34.536 [2024-06-10 11:30:59.626898] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:22:34.536 [2024-06-10 11:30:59.628608] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:34.795 passed 00:22:34.795 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-10 11:30:59.721290] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:34.795 [2024-06-10 11:30:59.722572] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:22:34.795 [2024-06-10 11:30:59.724314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:34.795 passed 00:22:34.795 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-10 11:30:59.814218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:35.054 [2024-06-10 11:30:59.901597] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:22:35.054 [2024-06-10 11:30:59.917586] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:22:35.054 [2024-06-10 11:30:59.922689] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:35.054 passed 00:22:35.054 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-10 11:31:00.012201] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:35.054 [2024-06-10 11:31:00.013474] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:22:35.054 [2024-06-10 11:31:00.015221] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:35.054 passed 00:22:35.054 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-10 11:31:00.107834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:35.313 [2024-06-10 11:31:00.183586] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:22:35.313 [2024-06-10 11:31:00.207590] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:22:35.313 [2024-06-10 11:31:00.212778] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:35.313 passed 00:22:35.313 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-10 11:31:00.304344] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:35.313 [2024-06-10 11:31:00.305626] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:22:35.313 [2024-06-10 11:31:00.305659] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:22:35.313 [2024-06-10 11:31:00.307362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:35.313 passed 00:22:35.313 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-10 11:31:00.397935] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:35.572 [2024-06-10 11:31:00.488586] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:22:35.572 [2024-06-10 11:31:00.496587] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:22:35.572 [2024-06-10 11:31:00.504585] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:22:35.572 [2024-06-10 11:31:00.512598] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:22:35.572 [2024-06-10 11:31:00.541682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:35.572 passed 00:22:35.572 Test: admin_create_io_sq_verify_pc ...[2024-06-10 11:31:00.633607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:35.572 [2024-06-10 11:31:00.653599] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:22:35.572 [2024-06-10 11:31:00.670916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:35.832 passed 00:22:35.832 Test: admin_create_io_qp_max_qps ...[2024-06-10 11:31:00.761550] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:36.769 [2024-06-10 11:31:01.872589] nvme_ctrlr.c:5384:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:22:37.337 [2024-06-10 11:31:02.268048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:37.337 passed 00:22:37.337 Test: admin_create_io_sq_shared_cq ...[2024-06-10 11:31:02.360684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:22:37.596 [2024-06-10 11:31:02.483590] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:22:37.597 [2024-06-10 11:31:02.520665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:22:37.597 passed 00:22:37.597 00:22:37.597 Run Summary: Type Total Ran Passed Failed Inactive 00:22:37.597 suites 1 1 n/a 0 0 00:22:37.597 tests 18 18 18 0 0 00:22:37.597 asserts 360 360 360 0 n/a 00:22:37.597 00:22:37.597 Elapsed time = 1.643 seconds 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3900999 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 3900999 ']' 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 3900999 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3900999 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3900999' 00:22:37.597 killing process with pid 3900999 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 3900999 00:22:37.597 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 3900999 00:22:37.857 11:31:02 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:22:37.857 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:22:37.857 00:22:37.857 real 0m6.703s 00:22:37.857 user 0m18.720s 00:22:37.857 sys 0m0.820s 00:22:37.857 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:37.857 11:31:02 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:22:37.857 ************************************ 00:22:37.857 END TEST nvmf_vfio_user_nvme_compliance 00:22:37.857 ************************************ 00:22:37.857 11:31:02 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:22:37.857 11:31:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:37.857 11:31:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:37.857 11:31:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.857 ************************************ 00:22:37.857 START TEST nvmf_vfio_user_fuzz 00:22:37.857 ************************************ 00:22:37.857 11:31:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:22:38.116 * Looking for test storage... 00:22:38.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.116 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3902213 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3902213' 00:22:38.117 Process pid: 3902213 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3902213 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 3902213 ']' 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
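For reference, the vfio-user target configuration traced below is plain SPDK RPC traffic. It reduces to roughly this rpc.py sequence (a sketch assembled from the values in the trace: a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0, socket directory /var/run/vfio-user; the test itself goes through its rpc_cmd wrapper rather than calling rpc.py directly):

  # Sketch only: equivalent rpc.py calls against the nvmf_tgt started above.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

nvme_fuzz is then pointed at the resulting trid (trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user) for a 30-second run with a fixed seed.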
00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:38.117 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:39.054 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:39.054 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:22:39.054 11:31:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:22:39.992 11:31:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:22:39.992 11:31:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.992 11:31:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:39.992 11:31:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.992 11:31:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:22:39.992 11:31:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:22:39.992 11:31:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.992 11:31:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:39.992 malloc0 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:22:39.992 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.993 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:22:39.993 11:31:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:23:12.079 Fuzzing completed. 
Shutting down the fuzz application 00:23:12.079 00:23:12.079 Dumping successful admin opcodes: 00:23:12.079 8, 9, 10, 24, 00:23:12.079 Dumping successful io opcodes: 00:23:12.079 0, 00:23:12.079 NS: 0x200003a1ef00 I/O qp, Total commands completed: 678466, total successful commands: 2641, random_seed: 945471168 00:23:12.079 NS: 0x200003a1ef00 admin qp, Total commands completed: 158181, total successful commands: 1273, random_seed: 2916165888 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3902213 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 3902213 ']' 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 3902213 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3902213 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3902213' 00:23:12.079 killing process with pid 3902213 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 3902213 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 3902213 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:23:12.079 00:23:12.079 real 0m32.950s 00:23:12.079 user 0m30.992s 00:23:12.079 sys 0m31.779s 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:12.079 11:31:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:12.079 ************************************ 00:23:12.079 END TEST nvmf_vfio_user_fuzz 00:23:12.079 ************************************ 00:23:12.079 11:31:35 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:23:12.079 11:31:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:12.079 11:31:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:12.079 11:31:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:12.079 ************************************ 00:23:12.079 START TEST nvmf_host_management 00:23:12.079 
************************************ 00:23:12.079 11:31:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:23:12.079 * Looking for test storage... 00:23:12.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.079 11:31:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
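With NET_TYPE=phy and SPDK_TEST_NVMF_NICS=e810, prepare_net_devs uses the physical NICs instead of creating virtual ones. The device discovery traced over the next lines boils down to a sysfs walk along these lines (a simplified sketch with an illustrative array name; the real common.sh iterates a combined pci_devs list and also covers x722, Mellanox and the RDMA case):

  # Sketch: map each E810 PCI function to its kernel net device via sysfs.
  for pci in "${e810_pci_addrs[@]}"; do          # on this node: 0000:af:00.0 and 0000:af:00.1
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      net_devs+=("${pci_net_devs[@]##*/}")       # keep only the interface name, e.g. cvl_0_0
  done

Both E810 ports resolve to ice-driver interfaces (cvl_0_0 and cvl_0_1), which the TCP init step below then wires up.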
00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:23:12.080 11:31:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.206 11:31:44 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:20.206 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:20.206 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:20.206 Found net devices under 0000:af:00.0: cvl_0_0 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:20.206 Found net devices under 0000:af:00.1: cvl_0_1 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.206 11:31:44 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:20.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:23:20.206 00:23:20.206 --- 10.0.0.2 ping statistics --- 00:23:20.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.206 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:20.206 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:20.206 00:23:20.206 --- 10.0.0.1 ping statistics --- 00:23:20.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.206 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3911748 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3911748 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 3911748 ']' 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
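The nvmf_tcp_init step traced above is what lets the target and the initiator share one machine: the first E810 port is moved into a dedicated network namespace and given the target address, while the second stays in the root namespace as the initiator side. Condensed from the trace (addresses and interface names exactly as detected on this node), the plumbing is:

  # Sketch: target interface in its own netns, initiator in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # reachability check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then started inside cvl_0_0_ns_spdk (via NVMF_TARGET_NS_CMD), so it listens on 10.0.0.2:4420 while bdevperf connects from the root namespace.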
00:23:20.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:20.207 11:31:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:20.207 [2024-06-10 11:31:45.026923] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:23:20.207 [2024-06-10 11:31:45.026984] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.207 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.207 [2024-06-10 11:31:45.144782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.207 [2024-06-10 11:31:45.232713] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.207 [2024-06-10 11:31:45.232757] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.207 [2024-06-10 11:31:45.232771] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.207 [2024-06-10 11:31:45.232783] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.207 [2024-06-10 11:31:45.232793] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.207 [2024-06-10 11:31:45.232900] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.207 [2024-06-10 11:31:45.233014] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.207 [2024-06-10 11:31:45.233123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.207 [2024-06-10 11:31:45.233123] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:21.145 [2024-06-10 11:31:45.928608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.145 11:31:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:21.145 Malloc0 00:23:21.145 [2024-06-10 11:31:45.996243] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3911972 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3911972 /var/tmp/bdevperf.sock 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 3911972 ']' 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:23:21.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.145 { 00:23:21.145 "params": { 00:23:21.145 "name": "Nvme$subsystem", 00:23:21.145 "trtype": "$TEST_TRANSPORT", 00:23:21.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.145 "adrfam": "ipv4", 00:23:21.145 "trsvcid": "$NVMF_PORT", 00:23:21.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.145 "hdgst": ${hdgst:-false}, 00:23:21.145 "ddgst": ${ddgst:-false} 00:23:21.145 }, 00:23:21.145 "method": "bdev_nvme_attach_controller" 00:23:21.145 } 00:23:21.145 EOF 00:23:21.145 )") 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:23:21.145 11:31:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:21.145 "params": { 00:23:21.145 "name": "Nvme0", 00:23:21.145 "trtype": "tcp", 00:23:21.145 "traddr": "10.0.0.2", 00:23:21.145 "adrfam": "ipv4", 00:23:21.145 "trsvcid": "4420", 00:23:21.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:21.145 "hdgst": false, 00:23:21.145 "ddgst": false 00:23:21.145 }, 00:23:21.145 "method": "bdev_nvme_attach_controller" 00:23:21.145 }' 00:23:21.145 [2024-06-10 11:31:46.105078] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:23:21.145 [2024-06-10 11:31:46.105139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911972 ] 00:23:21.145 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.145 [2024-06-10 11:31:46.225311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.404 [2024-06-10 11:31:46.307173] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.663 Running I/O for 10 seconds... 
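gen_nvmf_target_json expands the heredoc above into the one-controller configuration that bdevperf reads from /dev/fd/63. Saved as a standalone file it would look roughly like this; the params block is verbatim from the printed config, while the outer subsystems/bdev wrapper is the usual SPDK JSON-config shape and is assumed here rather than shown in the trace:

  # bdevperf_nvme0.json (sketch)
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

  # Run with the same flags as the traced command line:
  build/examples/bdevperf --json bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10

That is 64 outstanding 64 KiB verify I/Os for 10 seconds against the Nvme0n1 bdev created by the attach.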
00:23:21.921 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:21.921 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:23:21.921 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.922 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.922 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.182 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 [2024-06-10 11:31:47.084046] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a3650 is same with the state(5) to be set 00:23:22.182 [2024-06-10 11:31:47.084591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.084984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.182 [2024-06-10 11:31:47.084999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.182 [2024-06-10 11:31:47.085012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:22.183 [2024-06-10 11:31:47.085225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 
11:31:47.085506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085793] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.085972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.085987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.086000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.086015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.086028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.086043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.183 [2024-06-10 11:31:47.086056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.183 [2024-06-10 11:31:47.086071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.184 [2024-06-10 11:31:47.086419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.184 [2024-06-10 11:31:47.086433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7bd70 is same with the state(5) to be set 00:23:22.184 [2024-06-10 11:31:47.086492] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f7bd70 was disconnected and freed. reset controller. 00:23:22.184 [2024-06-10 11:31:47.087711] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:22.184 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.184 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:22.184 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.184 task offset: 79616 on job bdev=Nvme0n1 fails 00:23:22.184 00:23:22.184 Latency(us) 00:23:22.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.184 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.184 Job: Nvme0n1 ended in about 0.50 seconds with error 00:23:22.184 Verification LBA range: start 0x0 length 0x400 00:23:22.184 Nvme0n1 : 0.50 1162.86 72.68 129.21 0.00 48182.43 2595.23 45927.63 00:23:22.184 =================================================================================================================== 00:23:22.184 Total : 1162.86 72.68 129.21 0.00 48182.43 2595.23 45927.63 00:23:22.184 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:22.184 [2024-06-10 11:31:47.089815] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:22.184 [2024-06-10 11:31:47.089838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4a820 (9): Bad file descriptor 00:23:22.184 11:31:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.184 11:31:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:23:22.184 [2024-06-10 11:31:47.102900] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3911972 00:23:23.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3911972) - No such process 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.121 { 00:23:23.121 "params": { 00:23:23.121 "name": "Nvme$subsystem", 00:23:23.121 "trtype": "$TEST_TRANSPORT", 00:23:23.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.121 "adrfam": "ipv4", 00:23:23.121 "trsvcid": "$NVMF_PORT", 00:23:23.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.121 "hdgst": ${hdgst:-false}, 00:23:23.121 "ddgst": ${ddgst:-false} 00:23:23.121 }, 00:23:23.121 "method": "bdev_nvme_attach_controller" 00:23:23.121 } 00:23:23.121 EOF 00:23:23.121 )") 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:23:23.121 11:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:23.121 "params": { 00:23:23.121 "name": "Nvme0", 00:23:23.121 "trtype": "tcp", 00:23:23.121 "traddr": "10.0.0.2", 00:23:23.121 "adrfam": "ipv4", 00:23:23.121 "trsvcid": "4420", 00:23:23.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:23.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:23.121 "hdgst": false, 00:23:23.121 "ddgst": false 00:23:23.121 }, 00:23:23.121 "method": "bdev_nvme_attach_controller" 00:23:23.121 }' 00:23:23.121 [2024-06-10 11:31:48.158825] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:23:23.121 [2024-06-10 11:31:48.158890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912331 ] 00:23:23.121 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.380 [2024-06-10 11:31:48.277791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.380 [2024-06-10 11:31:48.359144] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.640 Running I/O for 1 seconds... 
00:23:24.577 00:23:24.577 Latency(us) 00:23:24.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.577 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.577 Verification LBA range: start 0x0 length 0x400 00:23:24.577 Nvme0n1 : 1.00 1211.59 75.72 0.00 0.00 51921.03 9384.76 46556.77 00:23:24.577 =================================================================================================================== 00:23:24.577 Total : 1211.59 75.72 0.00 0.00 51921.03 9384.76 46556.77 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.836 rmmod nvme_tcp 00:23:24.836 rmmod nvme_fabrics 00:23:24.836 rmmod nvme_keyring 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3911748 ']' 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3911748 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 3911748 ']' 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 3911748 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3911748 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3911748' 00:23:24.836 killing process with pid 3911748 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 3911748 00:23:24.836 11:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 3911748 00:23:25.095 [2024-06-10 11:31:50.101020] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:23:25.095 11:31:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:25.095 11:31:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:25.095 11:31:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:25.095 11:31:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:25.095 11:31:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:25.095 11:31:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.095 11:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.095 11:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.633 11:31:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.633 11:31:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:27.633 00:23:27.633 real 0m16.244s 00:23:27.633 user 0m24.471s 00:23:27.633 sys 0m8.229s 00:23:27.633 11:31:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:27.633 11:31:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:23:27.633 ************************************ 00:23:27.633 END TEST nvmf_host_management 00:23:27.633 ************************************ 00:23:27.633 11:31:52 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:23:27.633 11:31:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:27.633 11:31:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:27.633 11:31:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:27.633 ************************************ 00:23:27.633 START TEST nvmf_lvol 00:23:27.633 ************************************ 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:23:27.633 * Looking for test storage... 
00:23:27.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.633 11:31:52 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.633 11:31:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:35.839 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:35.839 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:35.839 Found net devices under 0000:af:00.0: cvl_0_0 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.839 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:35.839 Found net devices under 0000:af:00.1: cvl_0_1 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:35.840 
11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.840 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:36.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:23:36.099 00:23:36.099 --- 10.0.0.2 ping statistics --- 00:23:36.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.099 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:23:36.099 00:23:36.099 --- 10.0.0.1 ping statistics --- 00:23:36.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.099 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:36.099 11:32:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3917134 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3917134 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 3917134 ']' 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:36.099 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:36.099 [2024-06-10 11:32:01.081912] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:23:36.099 [2024-06-10 11:32:01.081982] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.099 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.359 [2024-06-10 11:32:01.212333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:36.359 [2024-06-10 11:32:01.297504] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.359 [2024-06-10 11:32:01.297550] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:36.359 [2024-06-10 11:32:01.297564] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.359 [2024-06-10 11:32:01.297581] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.359 [2024-06-10 11:32:01.297592] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.359 [2024-06-10 11:32:01.297643] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.359 [2024-06-10 11:32:01.297740] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.359 [2024-06-10 11:32:01.297744] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.927 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:36.927 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:23:36.927 11:32:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.927 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:36.927 11:32:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:37.186 11:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.186 11:32:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:37.186 [2024-06-10 11:32:02.246717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.186 11:32:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:37.445 11:32:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:23:37.445 11:32:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:37.704 11:32:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:23:37.704 11:32:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:23:37.963 11:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:23:38.223 11:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1e0ae191-8272-4b92-b375-f13129a4e621 00:23:38.223 11:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1e0ae191-8272-4b92-b375-f13129a4e621 lvol 20 00:23:38.482 11:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4813eb56-f77d-45b7-8839-977409fc954e 00:23:38.482 11:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:38.741 11:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4813eb56-f77d-45b7-8839-977409fc954e 00:23:38.741 11:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:23:39.000 [2024-06-10 11:32:04.016895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.000 11:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:39.259 11:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3917749 00:23:39.259 11:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:23:39.259 11:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:23:39.259 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.197 11:32:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4813eb56-f77d-45b7-8839-977409fc954e MY_SNAPSHOT 00:23:40.458 11:32:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f39ccc69-b6b4-404b-9e23-b72932b8a91c 00:23:40.458 11:32:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4813eb56-f77d-45b7-8839-977409fc954e 30 00:23:41.028 11:32:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f39ccc69-b6b4-404b-9e23-b72932b8a91c MY_CLONE 00:23:41.028 11:32:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8f7609d2-ceae-4680-be44-381cea06325a 00:23:41.028 11:32:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8f7609d2-ceae-4680-be44-381cea06325a 00:23:41.596 11:32:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3917749 00:23:49.718 Initializing NVMe Controllers 00:23:49.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:23:49.718 Controller IO queue size 128, less than required. 00:23:49.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:49.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:23:49.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:23:49.718 Initialization complete. Launching workers. 
00:23:49.718 ======================================================== 00:23:49.718 Latency(us) 00:23:49.718 Device Information : IOPS MiB/s Average min max 00:23:49.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9869.10 38.55 12974.46 2173.24 79792.22 00:23:49.718 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9753.20 38.10 13126.16 3835.95 74845.80 00:23:49.718 ======================================================== 00:23:49.718 Total : 19622.30 76.65 13049.86 2173.24 79792.22 00:23:49.718 00:23:49.718 11:32:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:49.978 11:32:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4813eb56-f77d-45b7-8839-977409fc954e 00:23:50.237 11:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e0ae191-8272-4b92-b375-f13129a4e621 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:50.497 rmmod nvme_tcp 00:23:50.497 rmmod nvme_fabrics 00:23:50.497 rmmod nvme_keyring 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3917134 ']' 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3917134 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 3917134 ']' 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 3917134 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3917134 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3917134' 00:23:50.497 killing process with pid 3917134 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 3917134 00:23:50.497 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 3917134 00:23:50.756 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.756 
11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.756 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.757 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.757 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.757 11:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.757 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.757 11:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.294 11:32:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:53.294 00:23:53.294 real 0m25.538s 00:23:53.294 user 1m5.314s 00:23:53.294 sys 0m11.548s 00:23:53.294 11:32:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:53.294 11:32:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:23:53.294 ************************************ 00:23:53.294 END TEST nvmf_lvol 00:23:53.294 ************************************ 00:23:53.294 11:32:17 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:53.294 11:32:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:53.294 11:32:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:53.294 11:32:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:53.294 ************************************ 00:23:53.294 START TEST nvmf_lvs_grow 00:23:53.294 ************************************ 00:23:53.294 11:32:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:53.294 * Looking for test storage... 
00:23:53.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:23:53.294 11:32:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:01.415 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:01.415 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.415 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.416 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.416 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.416 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.416 11:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:01.416 Found net devices under 0000:af:00.0: cvl_0_0 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:01.416 Found net devices under 0000:af:00.1: cvl_0_1 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:24:01.416 00:24:01.416 --- 10.0.0.2 ping statistics --- 00:24:01.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.416 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:24:01.416 00:24:01.416 --- 10.0.0.1 ping statistics --- 00:24:01.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.416 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3924004 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3924004 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 3924004 ']' 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:01.416 11:32:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:01.416 [2024-06-10 11:32:26.402285] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:24:01.416 [2024-06-10 11:32:26.402346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.416 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.676 [2024-06-10 11:32:26.530795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.676 [2024-06-10 11:32:26.615214] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.676 [2024-06-10 11:32:26.615259] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
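The trace above is nvmf_tcp_init building the physical TCP test bed: one ice port (cvl_0_0) is moved into a private network namespace to act as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator side, both get addresses on 10.0.0.0/24, and a ping in each direction proves the path before the target is started inside the namespace. A minimal sketch of that sequence, assuming the interface names and addresses of this particular run (the test pool assigns them per machine) and with the workspace paths shortened:

    # target port gets its own namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # let NVMe/TCP traffic in, then check both directions before starting the target
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # nvmf_tgt then runs inside the namespace on a single core (-m 0x1)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &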
00:24:01.676 [2024-06-10 11:32:26.615273] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.676 [2024-06-10 11:32:26.615285] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.676 [2024-06-10 11:32:26.615295] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.676 [2024-06-10 11:32:26.615328] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.245 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:02.245 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:24:02.245 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.245 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:02.245 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:02.245 11:32:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.245 11:32:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:02.504 [2024-06-10 11:32:27.543060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:02.504 ************************************ 00:24:02.504 START TEST lvs_grow_clean 00:24:02.504 ************************************ 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:02.504 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:02.505 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:24:02.505 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:24:02.505 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:02.764 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:24:02.764 11:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:03.024 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4c917c30-67dd-4314-a24a-71e642df4061 00:24:03.024 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:03.024 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:03.283 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:03.283 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:03.283 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4c917c30-67dd-4314-a24a-71e642df4061 lvol 150 00:24:03.542 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b 00:24:03.542 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:24:03.542 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:03.802 [2024-06-10 11:32:28.746933] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:03.802 [2024-06-10 11:32:28.746992] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:03.802 true 00:24:03.802 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:03.802 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:04.062 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:04.062 11:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:04.322 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b 00:24:04.322 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:04.581 [2024-06-10 11:32:29.629961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.581 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3924672 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3924672 /var/tmp/bdevperf.sock 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 3924672 ']' 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:04.841 11:32:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:04.841 [2024-06-10 11:32:29.922282] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
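Up to this point lvs_grow_clean has assembled the device stack purely through rpc.py: a 200 MiB file-backed AIO bdev, an lvstore with 4 MiB clusters on top of it, a 150 MiB lvol, and an NVMe-oF/TCP subsystem exporting that lvol to the bdevperf initiator it is now launching. A condensed sketch of those calls, assuming $rpc points at scripts/rpc.py in the build tree and $file at the aio_bdev backing file; the UUIDs are simply whatever the create calls print (4c917c30-... and cbbcf3c0-... in this run):

    rm -f "$file" && truncate -s 200M "$file"
    $rpc bdev_aio_create "$file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

    # export the lvol over NVMe/TCP so the initiator can drive I/O against it
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf (RPC socket /var/tmp/bdevperf.sock) attaches it as Nvme0n1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
            -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0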
00:24:04.841 [2024-06-10 11:32:29.922343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924672 ] 00:24:05.100 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.100 [2024-06-10 11:32:30.033741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.100 [2024-06-10 11:32:30.121690] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.038 11:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:06.038 11:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:24:06.038 11:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:06.038 Nvme0n1 00:24:06.300 11:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:06.300 [ 00:24:06.300 { 00:24:06.300 "name": "Nvme0n1", 00:24:06.300 "aliases": [ 00:24:06.300 "cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b" 00:24:06.300 ], 00:24:06.300 "product_name": "NVMe disk", 00:24:06.300 "block_size": 4096, 00:24:06.300 "num_blocks": 38912, 00:24:06.300 "uuid": "cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b", 00:24:06.300 "assigned_rate_limits": { 00:24:06.300 "rw_ios_per_sec": 0, 00:24:06.300 "rw_mbytes_per_sec": 0, 00:24:06.300 "r_mbytes_per_sec": 0, 00:24:06.300 "w_mbytes_per_sec": 0 00:24:06.300 }, 00:24:06.300 "claimed": false, 00:24:06.300 "zoned": false, 00:24:06.300 "supported_io_types": { 00:24:06.300 "read": true, 00:24:06.300 "write": true, 00:24:06.300 "unmap": true, 00:24:06.300 "write_zeroes": true, 00:24:06.300 "flush": true, 00:24:06.300 "reset": true, 00:24:06.300 "compare": true, 00:24:06.300 "compare_and_write": true, 00:24:06.300 "abort": true, 00:24:06.300 "nvme_admin": true, 00:24:06.300 "nvme_io": true 00:24:06.300 }, 00:24:06.300 "memory_domains": [ 00:24:06.300 { 00:24:06.300 "dma_device_id": "system", 00:24:06.300 "dma_device_type": 1 00:24:06.300 } 00:24:06.300 ], 00:24:06.300 "driver_specific": { 00:24:06.300 "nvme": [ 00:24:06.300 { 00:24:06.300 "trid": { 00:24:06.300 "trtype": "TCP", 00:24:06.300 "adrfam": "IPv4", 00:24:06.300 "traddr": "10.0.0.2", 00:24:06.300 "trsvcid": "4420", 00:24:06.300 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:06.300 }, 00:24:06.300 "ctrlr_data": { 00:24:06.300 "cntlid": 1, 00:24:06.300 "vendor_id": "0x8086", 00:24:06.300 "model_number": "SPDK bdev Controller", 00:24:06.300 "serial_number": "SPDK0", 00:24:06.300 "firmware_revision": "24.09", 00:24:06.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.300 "oacs": { 00:24:06.300 "security": 0, 00:24:06.300 "format": 0, 00:24:06.300 "firmware": 0, 00:24:06.300 "ns_manage": 0 00:24:06.300 }, 00:24:06.300 "multi_ctrlr": true, 00:24:06.300 "ana_reporting": false 00:24:06.300 }, 00:24:06.300 "vs": { 00:24:06.300 "nvme_version": "1.3" 00:24:06.300 }, 00:24:06.300 "ns_data": { 00:24:06.300 "id": 1, 00:24:06.300 "can_share": true 00:24:06.300 } 00:24:06.300 } 00:24:06.300 ], 00:24:06.300 "mp_policy": "active_passive" 00:24:06.300 } 00:24:06.300 } 00:24:06.300 ] 00:24:06.300 11:32:31 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3924937 00:24:06.300 11:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:06.300 11:32:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:06.637 Running I/O for 10 seconds... 00:24:07.584 Latency(us) 00:24:07.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:07.584 Nvme0n1 : 1.00 16238.00 63.43 0.00 0.00 0.00 0.00 0.00 00:24:07.584 =================================================================================================================== 00:24:07.584 Total : 16238.00 63.43 0.00 0.00 0.00 0.00 0.00 00:24:07.584 00:24:08.521 11:32:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:08.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:08.521 Nvme0n1 : 2.00 16335.00 63.81 0.00 0.00 0.00 0.00 0.00 00:24:08.521 =================================================================================================================== 00:24:08.521 Total : 16335.00 63.81 0.00 0.00 0.00 0.00 0.00 00:24:08.521 00:24:08.521 true 00:24:08.521 11:32:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:08.521 11:32:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:08.780 11:32:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:08.780 11:32:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:08.780 11:32:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3924937 00:24:09.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:09.715 Nvme0n1 : 3.00 16378.00 63.98 0.00 0.00 0.00 0.00 0.00 00:24:09.715 =================================================================================================================== 00:24:09.715 Total : 16378.00 63.98 0.00 0.00 0.00 0.00 0.00 00:24:09.715 00:24:10.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:10.651 Nvme0n1 : 4.00 16417.50 64.13 0.00 0.00 0.00 0.00 0.00 00:24:10.651 =================================================================================================================== 00:24:10.651 Total : 16417.50 64.13 0.00 0.00 0.00 0.00 0.00 00:24:10.651 00:24:11.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:11.600 Nvme0n1 : 5.00 16444.40 64.24 0.00 0.00 0.00 0.00 0.00 00:24:11.600 =================================================================================================================== 00:24:11.600 Total : 16444.40 64.24 0.00 0.00 0.00 0.00 0.00 00:24:11.600 00:24:12.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:12.537 Nvme0n1 : 6.00 16469.00 64.33 0.00 0.00 0.00 0.00 0.00 00:24:12.537 
=================================================================================================================== 00:24:12.537 Total : 16469.00 64.33 0.00 0.00 0.00 0.00 0.00 00:24:12.537 00:24:13.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:13.474 Nvme0n1 : 7.00 16488.86 64.41 0.00 0.00 0.00 0.00 0.00 00:24:13.475 =================================================================================================================== 00:24:13.475 Total : 16488.86 64.41 0.00 0.00 0.00 0.00 0.00 00:24:13.475 00:24:14.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:14.411 Nvme0n1 : 8.00 16504.75 64.47 0.00 0.00 0.00 0.00 0.00 00:24:14.411 =================================================================================================================== 00:24:14.411 Total : 16504.75 64.47 0.00 0.00 0.00 0.00 0.00 00:24:14.411 00:24:15.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:15.790 Nvme0n1 : 9.00 16517.11 64.52 0.00 0.00 0.00 0.00 0.00 00:24:15.790 =================================================================================================================== 00:24:15.790 Total : 16517.11 64.52 0.00 0.00 0.00 0.00 0.00 00:24:15.790 00:24:16.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:16.728 Nvme0n1 : 10.00 16529.40 64.57 0.00 0.00 0.00 0.00 0.00 00:24:16.728 =================================================================================================================== 00:24:16.728 Total : 16529.40 64.57 0.00 0.00 0.00 0.00 0.00 00:24:16.728 00:24:16.728 00:24:16.728 Latency(us) 00:24:16.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:16.728 Nvme0n1 : 10.01 16529.80 64.57 0.00 0.00 7736.97 5767.17 14365.49 00:24:16.728 =================================================================================================================== 00:24:16.728 Total : 16529.80 64.57 0.00 0.00 7736.97 5767.17 14365.49 00:24:16.728 0 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3924672 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 3924672 ']' 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 3924672 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3924672 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3924672' 00:24:16.728 killing process with pid 3924672 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 3924672 00:24:16.728 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.728 00:24:16.728 Latency(us) 00:24:16.728 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:24:16.728 =================================================================================================================== 00:24:16.728 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 3924672 00:24:16.728 11:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:16.987 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:17.245 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:17.245 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:17.504 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:24:17.504 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:24:17.504 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:17.764 [2024-06-10 11:32:42.689276] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:24:17.765 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:18.024 request: 00:24:18.024 { 00:24:18.024 "uuid": "4c917c30-67dd-4314-a24a-71e642df4061", 00:24:18.024 "method": "bdev_lvol_get_lvstores", 00:24:18.024 "req_id": 1 00:24:18.024 } 00:24:18.024 Got JSON-RPC error response 00:24:18.024 response: 00:24:18.024 { 00:24:18.024 "code": -19, 00:24:18.024 "message": "No such device" 00:24:18.024 } 00:24:18.024 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:24:18.024 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:18.024 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:18.024 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:18.024 11:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:18.283 aio_bdev 00:24:18.283 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b 00:24:18.283 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b 00:24:18.283 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:18.284 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:24:18.284 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:18.284 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:18.284 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:18.543 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b -t 2000 00:24:18.543 [ 00:24:18.543 { 00:24:18.543 "name": "cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b", 00:24:18.543 "aliases": [ 00:24:18.543 "lvs/lvol" 00:24:18.543 ], 00:24:18.543 "product_name": "Logical Volume", 00:24:18.543 "block_size": 4096, 00:24:18.543 "num_blocks": 38912, 00:24:18.543 "uuid": "cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b", 00:24:18.543 "assigned_rate_limits": { 00:24:18.543 "rw_ios_per_sec": 0, 00:24:18.543 "rw_mbytes_per_sec": 0, 00:24:18.543 "r_mbytes_per_sec": 0, 00:24:18.543 "w_mbytes_per_sec": 0 00:24:18.543 }, 00:24:18.543 "claimed": false, 00:24:18.543 "zoned": false, 00:24:18.543 "supported_io_types": { 00:24:18.543 "read": true, 00:24:18.543 "write": true, 00:24:18.543 "unmap": true, 00:24:18.543 "write_zeroes": true, 00:24:18.543 "flush": false, 00:24:18.543 "reset": true, 00:24:18.543 "compare": false, 00:24:18.543 "compare_and_write": false, 00:24:18.543 "abort": false, 00:24:18.543 "nvme_admin": false, 00:24:18.543 "nvme_io": false 00:24:18.543 }, 00:24:18.543 "driver_specific": { 00:24:18.543 "lvol": { 00:24:18.543 "lvol_store_uuid": "4c917c30-67dd-4314-a24a-71e642df4061", 00:24:18.543 "base_bdev": "aio_bdev", 
00:24:18.543 "thin_provision": false, 00:24:18.543 "num_allocated_clusters": 38, 00:24:18.543 "snapshot": false, 00:24:18.543 "clone": false, 00:24:18.543 "esnap_clone": false 00:24:18.543 } 00:24:18.543 } 00:24:18.543 } 00:24:18.543 ] 00:24:18.543 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:24:18.543 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:18.543 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:24:18.802 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:24:18.802 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:18.802 11:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:24:19.061 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:24:19.061 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cbbcf3c0-0af6-4d01-be6a-6d8ac66a5c8b 00:24:19.321 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4c917c30-67dd-4314-a24a-71e642df4061 00:24:19.580 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:24:19.839 00:24:19.839 real 0m17.220s 00:24:19.839 user 0m16.332s 00:24:19.839 sys 0m2.315s 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:19.839 ************************************ 00:24:19.839 END TEST lvs_grow_clean 00:24:19.839 ************************************ 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:19.839 ************************************ 00:24:19.839 START TEST lvs_grow_dirty 00:24:19.839 ************************************ 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:24:19.839 11:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:20.098 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:20.098 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:20.357 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:20.357 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:20.357 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:20.616 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:20.616 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:20.616 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c lvol 150 00:24:20.875 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fa84d500-8276-4420-8b8e-fad5ac6cdbb7 00:24:20.875 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:24:20.875 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:20.875 [2024-06-10 11:32:45.963737] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:20.875 [2024-06-10 11:32:45.963794] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:20.875 true 00:24:21.134 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:21.134 11:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:24:21.134 11:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:21.134 11:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:21.393 11:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fa84d500-8276-4420-8b8e-fad5ac6cdbb7 00:24:21.652 11:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:21.911 [2024-06-10 11:32:46.862482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.912 11:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3927657 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3927657 /var/tmp/bdevperf.sock 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 3927657 ']' 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:22.171 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:22.171 [2024-06-10 11:32:47.154260] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
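The grow step itself, exercised in the clean run above and repeated below for the dirty run, is short: during setup the backing file is doubled to 400 MiB and bdev_aio_rescan makes the AIO bdev pick up the new size (51200 to 102400 blocks in the trace) while the lvstore still reports 49 data clusters; then bdev_lvol_grow_lvstore is issued while bdevperf's 10 second randwrite workload is in flight, after which bdev_lvol_get_lvstores must report 99 total data clusters with 61 free, since the 150 MiB lvol occupies 38 of them. A sketch, assuming $rpc, $file and $lvs from the previous step:

    truncate -s 400M "$file"
    $rpc bdev_aio_rescan aio_bdev                  # AIO bdev grows from 51200 to 102400 blocks

    # grow the lvstore into the new space while bdevperf I/O is still running
    $rpc bdev_lvol_grow_lvstore -u "$lvs"

    total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    (( total == 99 && free == 61 ))                # 99 clusters total, 38 held by the 150M lvol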
00:24:22.171 [2024-06-10 11:32:47.154325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927657 ] 00:24:22.171 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.171 [2024-06-10 11:32:47.264714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.431 [2024-06-10 11:32:47.348915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.000 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:23.000 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:24:23.000 11:32:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:23.569 Nvme0n1 00:24:23.569 11:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:23.569 [ 00:24:23.569 { 00:24:23.569 "name": "Nvme0n1", 00:24:23.569 "aliases": [ 00:24:23.569 "fa84d500-8276-4420-8b8e-fad5ac6cdbb7" 00:24:23.569 ], 00:24:23.569 "product_name": "NVMe disk", 00:24:23.569 "block_size": 4096, 00:24:23.569 "num_blocks": 38912, 00:24:23.569 "uuid": "fa84d500-8276-4420-8b8e-fad5ac6cdbb7", 00:24:23.569 "assigned_rate_limits": { 00:24:23.569 "rw_ios_per_sec": 0, 00:24:23.569 "rw_mbytes_per_sec": 0, 00:24:23.569 "r_mbytes_per_sec": 0, 00:24:23.569 "w_mbytes_per_sec": 0 00:24:23.569 }, 00:24:23.569 "claimed": false, 00:24:23.569 "zoned": false, 00:24:23.569 "supported_io_types": { 00:24:23.569 "read": true, 00:24:23.569 "write": true, 00:24:23.569 "unmap": true, 00:24:23.569 "write_zeroes": true, 00:24:23.569 "flush": true, 00:24:23.569 "reset": true, 00:24:23.569 "compare": true, 00:24:23.569 "compare_and_write": true, 00:24:23.569 "abort": true, 00:24:23.569 "nvme_admin": true, 00:24:23.569 "nvme_io": true 00:24:23.569 }, 00:24:23.569 "memory_domains": [ 00:24:23.569 { 00:24:23.569 "dma_device_id": "system", 00:24:23.569 "dma_device_type": 1 00:24:23.569 } 00:24:23.569 ], 00:24:23.569 "driver_specific": { 00:24:23.569 "nvme": [ 00:24:23.569 { 00:24:23.569 "trid": { 00:24:23.569 "trtype": "TCP", 00:24:23.569 "adrfam": "IPv4", 00:24:23.569 "traddr": "10.0.0.2", 00:24:23.569 "trsvcid": "4420", 00:24:23.569 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:23.569 }, 00:24:23.569 "ctrlr_data": { 00:24:23.569 "cntlid": 1, 00:24:23.569 "vendor_id": "0x8086", 00:24:23.569 "model_number": "SPDK bdev Controller", 00:24:23.569 "serial_number": "SPDK0", 00:24:23.569 "firmware_revision": "24.09", 00:24:23.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:23.569 "oacs": { 00:24:23.569 "security": 0, 00:24:23.569 "format": 0, 00:24:23.569 "firmware": 0, 00:24:23.569 "ns_manage": 0 00:24:23.569 }, 00:24:23.569 "multi_ctrlr": true, 00:24:23.569 "ana_reporting": false 00:24:23.569 }, 00:24:23.569 "vs": { 00:24:23.569 "nvme_version": "1.3" 00:24:23.569 }, 00:24:23.569 "ns_data": { 00:24:23.569 "id": 1, 00:24:23.569 "can_share": true 00:24:23.569 } 00:24:23.569 } 00:24:23.569 ], 00:24:23.569 "mp_policy": "active_passive" 00:24:23.569 } 00:24:23.569 } 00:24:23.569 ] 00:24:23.569 11:32:48 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3927916 00:24:23.569 11:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:23.569 11:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:23.828 Running I/O for 10 seconds... 00:24:24.804 Latency(us) 00:24:24.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:24.804 Nvme0n1 : 1.00 16924.00 66.11 0.00 0.00 0.00 0.00 0.00 00:24:24.804 =================================================================================================================== 00:24:24.804 Total : 16924.00 66.11 0.00 0.00 0.00 0.00 0.00 00:24:24.804 00:24:25.751 11:32:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:25.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:25.751 Nvme0n1 : 2.00 17005.00 66.43 0.00 0.00 0.00 0.00 0.00 00:24:25.751 =================================================================================================================== 00:24:25.751 Total : 17005.00 66.43 0.00 0.00 0.00 0.00 0.00 00:24:25.751 00:24:26.009 true 00:24:26.009 11:32:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:26.009 11:32:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:26.268 11:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:26.268 11:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:26.268 11:32:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3927916 00:24:26.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:26.835 Nvme0n1 : 3.00 17032.33 66.53 0.00 0.00 0.00 0.00 0.00 00:24:26.835 =================================================================================================================== 00:24:26.835 Total : 17032.33 66.53 0.00 0.00 0.00 0.00 0.00 00:24:26.835 00:24:27.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:27.771 Nvme0n1 : 4.00 17078.00 66.71 0.00 0.00 0.00 0.00 0.00 00:24:27.771 =================================================================================================================== 00:24:27.771 Total : 17078.00 66.71 0.00 0.00 0.00 0.00 0.00 00:24:27.771 00:24:28.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:28.706 Nvme0n1 : 5.00 17105.80 66.82 0.00 0.00 0.00 0.00 0.00 00:24:28.706 =================================================================================================================== 00:24:28.706 Total : 17105.80 66.82 0.00 0.00 0.00 0.00 0.00 00:24:28.706 00:24:30.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:30.082 Nvme0n1 : 6.00 17124.00 66.89 0.00 0.00 0.00 0.00 0.00 00:24:30.082 
=================================================================================================================== 00:24:30.082 Total : 17124.00 66.89 0.00 0.00 0.00 0.00 0.00 00:24:30.082 00:24:31.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:31.090 Nvme0n1 : 7.00 17146.14 66.98 0.00 0.00 0.00 0.00 0.00 00:24:31.090 =================================================================================================================== 00:24:31.090 Total : 17146.14 66.98 0.00 0.00 0.00 0.00 0.00 00:24:31.090 00:24:32.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:32.026 Nvme0n1 : 8.00 17157.00 67.02 0.00 0.00 0.00 0.00 0.00 00:24:32.026 =================================================================================================================== 00:24:32.026 Total : 17157.00 67.02 0.00 0.00 0.00 0.00 0.00 00:24:32.026 00:24:32.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:32.963 Nvme0n1 : 9.00 17168.67 67.07 0.00 0.00 0.00 0.00 0.00 00:24:32.963 =================================================================================================================== 00:24:32.963 Total : 17168.67 67.07 0.00 0.00 0.00 0.00 0.00 00:24:32.963 00:24:33.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:33.900 Nvme0n1 : 10.00 17179.70 67.11 0.00 0.00 0.00 0.00 0.00 00:24:33.900 =================================================================================================================== 00:24:33.900 Total : 17179.70 67.11 0.00 0.00 0.00 0.00 0.00 00:24:33.900 00:24:33.900 00:24:33.900 Latency(us) 00:24:33.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:33.900 Nvme0n1 : 10.00 17184.83 67.13 0.00 0.00 7444.92 1992.29 12949.91 00:24:33.900 =================================================================================================================== 00:24:33.900 Total : 17184.83 67.13 0.00 0.00 7444.92 1992.29 12949.91 00:24:33.900 0 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3927657 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 3927657 ']' 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 3927657 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3927657 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3927657' 00:24:33.900 killing process with pid 3927657 00:24:33.900 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 3927657 00:24:33.900 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.900 00:24:33.900 Latency(us) 00:24:33.900 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:24:33.900 =================================================================================================================== 00:24:33.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.901 11:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 3927657 00:24:34.159 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:34.418 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:34.677 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:34.677 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3924004 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3924004 00:24:34.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3924004 Killed "${NVMF_APP[@]}" "$@" 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3929795 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3929795 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 3929795 ']' 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:34.937 11:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:34.937 [2024-06-10 11:32:59.910705] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:24:34.937 [2024-06-10 11:32:59.910771] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.937 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.196 [2024-06-10 11:33:00.041864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.196 [2024-06-10 11:33:00.131619] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.196 [2024-06-10 11:33:00.131659] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.196 [2024-06-10 11:33:00.131672] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.196 [2024-06-10 11:33:00.131684] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.196 [2024-06-10 11:33:00.131694] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.196 [2024-06-10 11:33:00.131726] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.765 11:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:35.765 11:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:24:35.765 11:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.765 11:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:35.765 11:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:35.765 11:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.024 11:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:36.024 [2024-06-10 11:33:01.083353] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:36.024 [2024-06-10 11:33:01.083454] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:36.024 [2024-06-10 11:33:01.083490] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:36.024 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:24:36.024 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fa84d500-8276-4420-8b8e-fad5ac6cdbb7 00:24:36.024 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=fa84d500-8276-4420-8b8e-fad5ac6cdbb7 00:24:36.024 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:36.024 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:24:36.024 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:24:36.024 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:36.024 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:36.283 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fa84d500-8276-4420-8b8e-fad5ac6cdbb7 -t 2000 00:24:36.542 [ 00:24:36.542 { 00:24:36.542 "name": "fa84d500-8276-4420-8b8e-fad5ac6cdbb7", 00:24:36.542 "aliases": [ 00:24:36.542 "lvs/lvol" 00:24:36.542 ], 00:24:36.542 "product_name": "Logical Volume", 00:24:36.542 "block_size": 4096, 00:24:36.542 "num_blocks": 38912, 00:24:36.542 "uuid": "fa84d500-8276-4420-8b8e-fad5ac6cdbb7", 00:24:36.542 "assigned_rate_limits": { 00:24:36.542 "rw_ios_per_sec": 0, 00:24:36.542 "rw_mbytes_per_sec": 0, 00:24:36.542 "r_mbytes_per_sec": 0, 00:24:36.542 "w_mbytes_per_sec": 0 00:24:36.542 }, 00:24:36.542 "claimed": false, 00:24:36.542 "zoned": false, 00:24:36.542 "supported_io_types": { 00:24:36.542 "read": true, 00:24:36.542 "write": true, 00:24:36.542 "unmap": true, 00:24:36.542 "write_zeroes": true, 00:24:36.542 "flush": false, 00:24:36.542 "reset": true, 00:24:36.542 "compare": false, 00:24:36.542 "compare_and_write": false, 00:24:36.542 "abort": false, 00:24:36.542 "nvme_admin": false, 00:24:36.542 "nvme_io": false 00:24:36.542 }, 00:24:36.542 "driver_specific": { 00:24:36.542 "lvol": { 00:24:36.542 "lvol_store_uuid": "504af8f6-006d-4304-a6f2-c8fc2b1a3d8c", 00:24:36.542 "base_bdev": "aio_bdev", 00:24:36.542 "thin_provision": false, 00:24:36.542 "num_allocated_clusters": 38, 00:24:36.542 "snapshot": false, 00:24:36.542 "clone": false, 00:24:36.542 "esnap_clone": false 00:24:36.542 } 00:24:36.542 } 00:24:36.542 } 00:24:36.542 ] 00:24:36.542 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:24:36.542 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:36.542 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:24:36.801 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:24:36.801 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:36.801 11:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:24:37.059 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:24:37.059 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:37.318 [2024-06-10 11:33:02.227915] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:24:37.318 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:37.577 request: 00:24:37.577 { 00:24:37.577 "uuid": "504af8f6-006d-4304-a6f2-c8fc2b1a3d8c", 00:24:37.577 "method": "bdev_lvol_get_lvstores", 00:24:37.577 "req_id": 1 00:24:37.577 } 00:24:37.577 Got JSON-RPC error response 00:24:37.577 response: 00:24:37.577 { 00:24:37.577 "code": -19, 00:24:37.577 "message": "No such device" 00:24:37.577 } 00:24:37.577 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:24:37.577 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:37.577 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:37.577 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:37.577 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:37.835 aio_bdev 00:24:37.835 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fa84d500-8276-4420-8b8e-fad5ac6cdbb7 00:24:37.835 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=fa84d500-8276-4420-8b8e-fad5ac6cdbb7 00:24:37.835 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:24:37.835 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:24:37.835 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
00:24:37.835 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:24:37.835 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:38.094 11:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fa84d500-8276-4420-8b8e-fad5ac6cdbb7 -t 2000 00:24:38.094 [ 00:24:38.094 { 00:24:38.094 "name": "fa84d500-8276-4420-8b8e-fad5ac6cdbb7", 00:24:38.094 "aliases": [ 00:24:38.094 "lvs/lvol" 00:24:38.094 ], 00:24:38.094 "product_name": "Logical Volume", 00:24:38.094 "block_size": 4096, 00:24:38.094 "num_blocks": 38912, 00:24:38.094 "uuid": "fa84d500-8276-4420-8b8e-fad5ac6cdbb7", 00:24:38.094 "assigned_rate_limits": { 00:24:38.094 "rw_ios_per_sec": 0, 00:24:38.094 "rw_mbytes_per_sec": 0, 00:24:38.094 "r_mbytes_per_sec": 0, 00:24:38.094 "w_mbytes_per_sec": 0 00:24:38.094 }, 00:24:38.094 "claimed": false, 00:24:38.094 "zoned": false, 00:24:38.094 "supported_io_types": { 00:24:38.094 "read": true, 00:24:38.094 "write": true, 00:24:38.094 "unmap": true, 00:24:38.094 "write_zeroes": true, 00:24:38.094 "flush": false, 00:24:38.094 "reset": true, 00:24:38.094 "compare": false, 00:24:38.094 "compare_and_write": false, 00:24:38.094 "abort": false, 00:24:38.094 "nvme_admin": false, 00:24:38.094 "nvme_io": false 00:24:38.094 }, 00:24:38.094 "driver_specific": { 00:24:38.094 "lvol": { 00:24:38.094 "lvol_store_uuid": "504af8f6-006d-4304-a6f2-c8fc2b1a3d8c", 00:24:38.094 "base_bdev": "aio_bdev", 00:24:38.094 "thin_provision": false, 00:24:38.094 "num_allocated_clusters": 38, 00:24:38.094 "snapshot": false, 00:24:38.094 "clone": false, 00:24:38.094 "esnap_clone": false 00:24:38.094 } 00:24:38.094 } 00:24:38.094 } 00:24:38.094 ] 00:24:38.094 11:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:24:38.094 11:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:38.094 11:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:24:38.353 11:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:24:38.353 11:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:38.353 11:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:24:38.612 11:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:24:38.612 11:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fa84d500-8276-4420-8b8e-fad5ac6cdbb7 00:24:38.871 11:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 504af8f6-006d-4304-a6f2-c8fc2b1a3d8c 00:24:39.130 11:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:39.389 11:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:24:39.389 00:24:39.389 real 0m19.484s 00:24:39.389 user 0m48.445s 00:24:39.389 sys 0m5.008s 00:24:39.389 11:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:39.389 11:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:39.389 ************************************ 00:24:39.389 END TEST lvs_grow_dirty 00:24:39.389 ************************************ 00:24:39.389 11:33:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:24:39.389 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:24:39.389 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:24:39.389 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:24:39.389 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:39.390 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:24:39.390 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:24:39.390 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:24:39.390 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:39.390 nvmf_trace.0 00:24:39.390 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:24:39.390 11:33:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:24:39.390 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.390 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.649 rmmod nvme_tcp 00:24:39.649 rmmod nvme_fabrics 00:24:39.649 rmmod nvme_keyring 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3929795 ']' 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3929795 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 3929795 ']' 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 3929795 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3929795 00:24:39.649 11:33:04 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3929795' 00:24:39.649 killing process with pid 3929795 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 3929795 00:24:39.649 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 3929795 00:24:39.909 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:39.909 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:39.909 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:39.909 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.909 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.909 11:33:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.909 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.909 11:33:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.816 11:33:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:41.816 00:24:41.816 real 0m48.989s 00:24:41.816 user 1m12.351s 00:24:41.816 sys 0m14.208s 00:24:41.816 11:33:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:41.816 11:33:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:41.816 ************************************ 00:24:41.816 END TEST nvmf_lvs_grow 00:24:41.816 ************************************ 00:24:42.076 11:33:06 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:24:42.076 11:33:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:42.076 11:33:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:42.076 11:33:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:42.076 ************************************ 00:24:42.076 START TEST nvmf_bdev_io_wait 00:24:42.076 ************************************ 00:24:42.076 11:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:24:42.076 * Looking for test storage... 
00:24:42.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.076 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:42.077 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:42.077 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:42.077 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.077 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.077 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.077 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:42.077 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:42.077 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:24:42.077 11:33:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:52.063 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.063 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:52.064 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:52.064 Found net devices under 0000:af:00.0: cvl_0_0 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:52.064 Found net devices under 0000:af:00.1: cvl_0_1 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:52.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:52.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:24:52.064 00:24:52.064 --- 10.0.0.2 ping statistics --- 00:24:52.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.064 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:52.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:24:52.064 00:24:52.064 --- 10.0.0.1 ping statistics --- 00:24:52.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.064 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3935649 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3935649 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 3935649 ']' 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:52.064 11:33:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.064 [2024-06-10 11:33:16.031396] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:24:52.064 [2024-06-10 11:33:16.031438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.064 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.064 [2024-06-10 11:33:16.141122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:52.064 [2024-06-10 11:33:16.229224] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.064 [2024-06-10 11:33:16.229268] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.064 [2024-06-10 11:33:16.229282] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.064 [2024-06-10 11:33:16.229294] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.064 [2024-06-10 11:33:16.229304] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.064 [2024-06-10 11:33:16.229363] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.064 [2024-06-10 11:33:16.229456] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.064 [2024-06-10 11:33:16.229570] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.064 [2024-06-10 11:33:16.229571] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.064 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.065 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:24:52.065 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.065 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.065 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.065 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:52.065 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.065 11:33:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.065 [2024-06-10 11:33:17.006848] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.065 11:33:17 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.065 Malloc0 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:52.065 [2024-06-10 11:33:17.068744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3935866 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3935870 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:52.065 { 00:24:52.065 "params": { 00:24:52.065 "name": "Nvme$subsystem", 00:24:52.065 "trtype": "$TEST_TRANSPORT", 00:24:52.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.065 "adrfam": "ipv4", 00:24:52.065 "trsvcid": "$NVMF_PORT", 00:24:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.065 "hdgst": ${hdgst:-false}, 00:24:52.065 "ddgst": ${ddgst:-false} 00:24:52.065 }, 00:24:52.065 "method": "bdev_nvme_attach_controller" 00:24:52.065 } 00:24:52.065 EOF 00:24:52.065 )") 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3935872 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:52.065 { 00:24:52.065 "params": { 00:24:52.065 "name": "Nvme$subsystem", 00:24:52.065 "trtype": "$TEST_TRANSPORT", 00:24:52.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.065 "adrfam": "ipv4", 00:24:52.065 "trsvcid": "$NVMF_PORT", 00:24:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.065 "hdgst": ${hdgst:-false}, 00:24:52.065 "ddgst": ${ddgst:-false} 00:24:52.065 }, 00:24:52.065 "method": "bdev_nvme_attach_controller" 00:24:52.065 } 00:24:52.065 EOF 00:24:52.065 )") 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3935876 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:52.065 { 00:24:52.065 "params": { 00:24:52.065 "name": "Nvme$subsystem", 00:24:52.065 "trtype": "$TEST_TRANSPORT", 00:24:52.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.065 "adrfam": "ipv4", 00:24:52.065 "trsvcid": "$NVMF_PORT", 00:24:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.065 "hdgst": ${hdgst:-false}, 00:24:52.065 "ddgst": ${ddgst:-false} 00:24:52.065 }, 00:24:52.065 "method": "bdev_nvme_attach_controller" 00:24:52.065 } 00:24:52.065 EOF 00:24:52.065 )") 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:52.065 11:33:17 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:52.065 { 00:24:52.065 "params": { 00:24:52.065 "name": "Nvme$subsystem", 00:24:52.065 "trtype": "$TEST_TRANSPORT", 00:24:52.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:52.065 "adrfam": "ipv4", 00:24:52.065 "trsvcid": "$NVMF_PORT", 00:24:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:52.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:52.065 "hdgst": ${hdgst:-false}, 00:24:52.065 "ddgst": ${ddgst:-false} 00:24:52.065 }, 00:24:52.065 "method": "bdev_nvme_attach_controller" 00:24:52.065 } 00:24:52.065 EOF 00:24:52.065 )") 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3935866 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:24:52.065 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:52.065 "params": { 00:24:52.065 "name": "Nvme1", 00:24:52.065 "trtype": "tcp", 00:24:52.065 "traddr": "10.0.0.2", 00:24:52.065 "adrfam": "ipv4", 00:24:52.065 "trsvcid": "4420", 00:24:52.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.065 "hdgst": false, 00:24:52.065 "ddgst": false 00:24:52.065 }, 00:24:52.065 "method": "bdev_nvme_attach_controller" 00:24:52.065 }' 00:24:52.066 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:24:52.066 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:24:52.066 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:52.066 "params": { 00:24:52.066 "name": "Nvme1", 00:24:52.066 "trtype": "tcp", 00:24:52.066 "traddr": "10.0.0.2", 00:24:52.066 "adrfam": "ipv4", 00:24:52.066 "trsvcid": "4420", 00:24:52.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.066 "hdgst": false, 00:24:52.066 "ddgst": false 00:24:52.066 }, 00:24:52.066 "method": "bdev_nvme_attach_controller" 00:24:52.066 }' 00:24:52.066 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:24:52.066 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:52.066 "params": { 00:24:52.066 "name": "Nvme1", 00:24:52.066 "trtype": "tcp", 00:24:52.066 "traddr": "10.0.0.2", 00:24:52.066 "adrfam": "ipv4", 00:24:52.066 "trsvcid": "4420", 00:24:52.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.066 "hdgst": false, 00:24:52.066 "ddgst": false 00:24:52.066 }, 00:24:52.066 "method": "bdev_nvme_attach_controller" 00:24:52.066 }' 00:24:52.066 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:24:52.066 11:33:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:52.066 "params": { 00:24:52.066 "name": "Nvme1", 00:24:52.066 "trtype": "tcp", 00:24:52.066 "traddr": "10.0.0.2", 00:24:52.066 "adrfam": "ipv4", 00:24:52.066 "trsvcid": "4420", 00:24:52.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.066 "hdgst": false, 00:24:52.066 "ddgst": false 00:24:52.066 }, 00:24:52.066 "method": "bdev_nvme_attach_controller" 00:24:52.066 }' 00:24:52.066 [2024-06-10 11:33:17.125151] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:24:52.066 [2024-06-10 11:33:17.125215] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:52.066 [2024-06-10 11:33:17.126735] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:24:52.066 [2024-06-10 11:33:17.126735] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:24:52.066 [2024-06-10 11:33:17.126800] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:24:52.066 [2024-06-10 11:33:17.126801] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:24:52.066 [2024-06-10 11:33:17.129987] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization...
00:24:52.066 [2024-06-10 11:33:17.130043] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:24:52.325 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.325 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.325 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.325 [2024-06-10 11:33:17.364512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.325 [2024-06-10 11:33:17.425631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.325 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.584 [2024-06-10 11:33:17.467905] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:24:52.584 [2024-06-10 11:33:17.487002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.584 [2024-06-10 11:33:17.512199] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:24:52.584 [2024-06-10 11:33:17.570053] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:24:52.584 [2024-06-10 11:33:17.586538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.584 Running I/O for 1 seconds... 00:24:52.584 [2024-06-10 11:33:17.679110] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:24:52.843 Running I/O for 1 seconds... 00:24:52.843 Running I/O for 1 seconds... 00:24:52.843 Running I/O for 1 seconds... 00:24:53.780 00:24:53.780 Latency(us) 00:24:53.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.780 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:24:53.780 Nvme1n1 : 1.01 10354.52 40.45 0.00 0.00 12311.17 6868.17 18664.65 00:24:53.780 =================================================================================================================== 00:24:53.780 Total : 10354.52 40.45 0.00 0.00 12311.17 6868.17 18664.65 00:24:53.780 00:24:53.780 Latency(us) 00:24:53.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.780 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:24:53.780 Nvme1n1 : 1.00 183117.89 715.30 0.00 0.00 696.26 288.36 858.52 00:24:53.780 =================================================================================================================== 00:24:53.780 Total : 183117.89 715.30 0.00 0.00 696.26 288.36 858.52 00:24:53.780 11:33:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3935870 00:24:53.780 00:24:53.780 Latency(us) 00:24:53.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.780 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:24:53.780 Nvme1n1 : 1.01 10248.41 40.03 0.00 0.00 12439.61 7444.89 18979.23 00:24:53.780 =================================================================================================================== 00:24:53.780 Total : 10248.41 40.03 0.00 0.00 12439.61 7444.89 18979.23 00:24:54.039 00:24:54.039 Latency(us) 00:24:54.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.039 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:24:54.039 Nvme1n1 : 1.01 9142.77 35.71 0.00 0.00 13954.16 6055.53 25794.97 00:24:54.039 =================================================================================================================== 00:24:54.039 Total : 9142.77 35.71 0.00 0.00 13954.16 
6055.53 25794.97 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3935872 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3935876 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.299 rmmod nvme_tcp 00:24:54.299 rmmod nvme_fabrics 00:24:54.299 rmmod nvme_keyring 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3935649 ']' 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3935649 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 3935649 ']' 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 3935649 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3935649 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3935649' 00:24:54.299 killing process with pid 3935649 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 3935649 00:24:54.299 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 3935649 00:24:54.559 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:54.559 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:54.559 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:54.559 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.559 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:24:54.559 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.559 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.559 11:33:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.096 11:33:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.096 00:24:57.096 real 0m14.617s 00:24:57.096 user 0m21.104s 00:24:57.096 sys 0m8.970s 00:24:57.096 11:33:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:57.096 11:33:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:24:57.096 ************************************ 00:24:57.096 END TEST nvmf_bdev_io_wait 00:24:57.096 ************************************ 00:24:57.096 11:33:21 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:57.096 11:33:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:57.096 11:33:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:57.096 11:33:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:57.096 ************************************ 00:24:57.096 START TEST nvmf_queue_depth 00:24:57.096 ************************************ 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:57.096 * Looking for test storage... 00:24:57.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.096 11:33:21 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.096 11:33:21 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.096 11:33:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:05.292 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:05.292 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:05.292 Found net devices under 0000:af:00.0: cvl_0_0 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:05.292 Found net devices under 0000:af:00.1: cvl_0_1 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:05.292 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:05.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:25:05.293 00:25:05.293 --- 10.0.0.2 ping statistics --- 00:25:05.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.293 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:25:05.293 00:25:05.293 --- 10.0.0.1 ping statistics --- 00:25:05.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.293 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3940657 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3940657 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 
3940657 ']' 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:05.293 11:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:05.293 [2024-06-10 11:33:29.978995] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:25:05.293 [2024-06-10 11:33:29.979055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.293 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.293 [2024-06-10 11:33:30.098526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.293 [2024-06-10 11:33:30.179458] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.293 [2024-06-10 11:33:30.179504] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.293 [2024-06-10 11:33:30.179517] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.293 [2024-06-10 11:33:30.179529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.293 [2024-06-10 11:33:30.179539] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:05.293 [2024-06-10 11:33:30.179565] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.860 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:05.860 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:25:05.860 11:33:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:05.860 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:05.860 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:05.860 11:33:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.860 11:33:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:05.860 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.861 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:05.861 [2024-06-10 11:33:30.946626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.861 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:05.861 11:33:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:05.861 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:05.861 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 Malloc0 00:25:06.119 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.119 11:33:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.119 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.119 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.119 11:33:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:06.119 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.119 11:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 [2024-06-10 11:33:31.005977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3940714 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- 
target/queue_depth.sh@33 -- # waitforlisten 3940714 /var/tmp/bdevperf.sock 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 3940714 ']' 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 11:33:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:25:06.119 [2024-06-10 11:33:31.061009] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:25:06.119 [2024-06-10 11:33:31.061067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940714 ] 00:25:06.119 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.119 [2024-06-10 11:33:31.181048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.378 [2024-06-10 11:33:31.265865] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.945 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:06.945 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:25:06.945 11:33:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.945 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.945 11:33:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.203 NVMe0n1 00:25:07.203 11:33:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:07.203 11:33:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:07.461 Running I/O for 10 seconds... 
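Condensed, the queue_depth flow traced above amounts to the sequence below. This is a readability sketch rather than additional test steps: rpc.py stands in for the suite's rpc_cmd wrapper, and the nvmf target is assumed to already be listening on its default /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace.

# Readability sketch of the traced steps above (hypothetical recap, not executed separately)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport with the suite's options
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf waits on its own RPC socket (-z), attaches the remote controller over TCP,
# then runs 4 KiB verify I/O at queue depth 1024 for 10 seconds
$bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests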
00:25:17.442 00:25:17.442 Latency(us) 00:25:17.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.442 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:25:17.442 Verification LBA range: start 0x0 length 0x4000 00:25:17.442 NVMe0n1 : 10.08 9136.44 35.69 0.00 0.00 111639.64 26214.40 78433.48 00:25:17.442 =================================================================================================================== 00:25:17.442 Total : 9136.44 35.69 0.00 0.00 111639.64 26214.40 78433.48 00:25:17.442 0 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3940714 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 3940714 ']' 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 3940714 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3940714 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3940714' 00:25:17.442 killing process with pid 3940714 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 3940714 00:25:17.442 Received shutdown signal, test time was about 10.000000 seconds 00:25:17.442 00:25:17.442 Latency(us) 00:25:17.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.442 =================================================================================================================== 00:25:17.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.442 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 3940714 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:17.701 rmmod nvme_tcp 00:25:17.701 rmmod nvme_fabrics 00:25:17.701 rmmod nvme_keyring 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3940657 ']' 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3940657 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 
3940657 ']' 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 3940657 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:17.701 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3940657 00:25:17.961 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:17.961 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:17.961 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3940657' 00:25:17.961 killing process with pid 3940657 00:25:17.961 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 3940657 00:25:17.961 11:33:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 3940657 00:25:17.961 11:33:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:17.961 11:33:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:17.961 11:33:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:17.961 11:33:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.961 11:33:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:17.961 11:33:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.961 11:33:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.961 11:33:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.498 11:33:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:20.498 00:25:20.498 real 0m23.425s 00:25:20.498 user 0m25.926s 00:25:20.498 sys 0m8.324s 00:25:20.498 11:33:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:20.498 11:33:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:20.498 ************************************ 00:25:20.498 END TEST nvmf_queue_depth 00:25:20.498 ************************************ 00:25:20.498 11:33:45 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:25:20.498 11:33:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:20.498 11:33:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:20.498 11:33:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:20.498 ************************************ 00:25:20.498 START TEST nvmf_target_multipath 00:25:20.498 ************************************ 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:25:20.498 * Looking for test storage... 
00:25:20.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.498 
11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:25:20.498 11:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:25:28.621 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:28.622 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:28.622 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.622 
11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:28.622 Found net devices under 0000:af:00.0: cvl_0_0 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:28.622 Found net devices under 0000:af:00.1: cvl_0_1 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.622 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.882 
11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:25:28.882 00:25:28.882 --- 10.0.0.2 ping statistics --- 00:25:28.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.882 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:25:28.882 00:25:28.882 --- 10.0.0.1 ping statistics --- 00:25:28.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.882 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.882 11:33:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:25:29.142 only one NIC for nvmf test 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.142 rmmod nvme_tcp 00:25:29.142 rmmod nvme_fabrics 00:25:29.142 rmmod nvme_keyring 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:25:29.142 11:33:54 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.142 11:33:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.050 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.050 11:33:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:25:31.050 11:33:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:25:31.050 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:31.050 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:31.309 00:25:31.309 real 0m10.999s 00:25:31.309 user 0m2.412s 00:25:31.309 sys 0m6.659s 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:31.309 11:33:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:31.309 
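The multipath test above bails out early because this rig only exposes the two E810 ports 0000:af:00.0/00.1, one of which becomes the target and one the initiator, so nvmf/common.sh leaves NVMF_SECOND_TARGET_IP empty and target/multipath.sh has nothing to multipath across. A minimal sketch of that guard, assuming the variable name from nvmf/common.sh as traced above (not the verbatim script body):

    # target/multipath.sh, roughly lines 45-48 as seen in the trace above
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo "only one NIC for nvmf test"
        nvmftestfini
        exit 0
    fi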
************************************ 00:25:31.309 END TEST nvmf_target_multipath 00:25:31.309 ************************************ 00:25:31.309 11:33:56 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:25:31.309 11:33:56 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:31.309 11:33:56 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:31.309 11:33:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:31.309 ************************************ 00:25:31.310 START TEST nvmf_zcopy 00:25:31.310 ************************************ 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:25:31.310 * Looking for test storage... 00:25:31.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.310 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 
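nvmftestinit now repeats for the zcopy test the same NVMe/TCP plumbing that was set up for the multipath test: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and TCP port 4420 is opened between them. A condensed sketch of that sequence, reconstructed from the nvmf/common.sh commands traced below rather than copied from the script:

    NVMF_INITIATOR_IP=10.0.0.1
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"         # target port lives in the namespace
    ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1            # initiator port stays in the root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) traffic arriving on cvl_0_1
    ping -c 1 "$NVMF_FIRST_TARGET_IP"                          # sanity checks in both directions, as in the trace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"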
00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.569 11:33:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:25:39.691 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
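gather_supported_nvmf_pci_devs, traced above and continuing below, is just building lookup tables of PCI device IDs so the test can tell which NICs on the box are usable for NVMe-oF. A compact sketch of that classification, with the IDs taken from the trace (the pci_bus_cache lookups themselves are omitted):

    intel=0x8086 mellanox=0x15b3
    e810=(0x1592 0x159b)     # Intel E810 / ice - the 0000:af:00.x ports found in this run
    x722=(0x37d2)            # Intel X722
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)   # Mellanox adapters
    # With SPDK_TEST_NVMF_NICS=e810 and a tcp transport, only the e810 list is kept as pci_devs.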
00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:39.692 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:39.692 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:39.692 Found net devices under 0000:af:00.0: cvl_0_0 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:39.692 Found net devices under 0000:af:00.1: cvl_0_1 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.692 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:25:39.951 00:25:39.951 --- 10.0.0.2 ping statistics --- 00:25:39.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.951 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:25:39.951 00:25:39.951 --- 10.0.0.1 ping statistics --- 00:25:39.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.951 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3951635 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3951635 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 3951635 ']' 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:39.951 11:34:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.952 11:34:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:39.952 11:34:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:39.952 11:34:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:39.952 [2024-06-10 11:34:04.948013] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:25:39.952 [2024-06-10 11:34:04.948073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.952 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.210 [2024-06-10 11:34:05.064199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.210 [2024-06-10 11:34:05.148698] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.210 [2024-06-10 11:34:05.148739] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:40.210 [2024-06-10 11:34:05.148757] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.210 [2024-06-10 11:34:05.148769] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.210 [2024-06-10 11:34:05.148780] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.210 [2024-06-10 11:34:05.148813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.778 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:40.778 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:25:40.778 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:40.778 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:40.778 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:41.037 [2024-06-10 11:34:05.895772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:41.037 [2024-06-10 11:34:05.911935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:41.037 malloc0 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.037 
11:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:41.037 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:41.038 { 00:25:41.038 "params": { 00:25:41.038 "name": "Nvme$subsystem", 00:25:41.038 "trtype": "$TEST_TRANSPORT", 00:25:41.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.038 "adrfam": "ipv4", 00:25:41.038 "trsvcid": "$NVMF_PORT", 00:25:41.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.038 "hdgst": ${hdgst:-false}, 00:25:41.038 "ddgst": ${ddgst:-false} 00:25:41.038 }, 00:25:41.038 "method": "bdev_nvme_attach_controller" 00:25:41.038 } 00:25:41.038 EOF 00:25:41.038 )") 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:25:41.038 11:34:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:41.038 "params": { 00:25:41.038 "name": "Nvme1", 00:25:41.038 "trtype": "tcp", 00:25:41.038 "traddr": "10.0.0.2", 00:25:41.038 "adrfam": "ipv4", 00:25:41.038 "trsvcid": "4420", 00:25:41.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:41.038 "hdgst": false, 00:25:41.038 "ddgst": false 00:25:41.038 }, 00:25:41.038 "method": "bdev_nvme_attach_controller" 00:25:41.038 }' 00:25:41.038 [2024-06-10 11:34:05.993197] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:25:41.038 [2024-06-10 11:34:05.993263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951710 ] 00:25:41.038 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.038 [2024-06-10 11:34:06.113597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.296 [2024-06-10 11:34:06.200086] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.555 Running I/O for 10 seconds... 
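Before the 10-second verify run starts, the zcopy test has configured the target and wired bdevperf to it entirely through the RPCs traced above. A condensed sketch of that sequence, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py and using the parameters shown in the trace:

    # Target side: TCP transport with zero-copy enabled, one subsystem, one malloc namespace
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Initiator side: bdevperf attaches over TCP using the JSON printed above
    # (/dev/fd/62 in the trace corresponds to the process substitution shown here)
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -t 10 -q 128 -w verify -o 8192    # 10 s verify workload, queue depth 128, 8 KiB I/O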
00:25:51.602 00:25:51.602 Latency(us) 00:25:51.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.602 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:25:51.602 Verification LBA range: start 0x0 length 0x1000 00:25:51.602 Nvme1n1 : 10.01 6382.68 49.86 0.00 0.00 19991.61 514.46 36280.73 00:25:51.602 =================================================================================================================== 00:25:51.602 Total : 6382.68 49.86 0.00 0.00 19991.61 514.46 36280.73 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3953561 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:51.863 { 00:25:51.863 "params": { 00:25:51.863 "name": "Nvme$subsystem", 00:25:51.863 "trtype": "$TEST_TRANSPORT", 00:25:51.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:51.863 "adrfam": "ipv4", 00:25:51.863 "trsvcid": "$NVMF_PORT", 00:25:51.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:51.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:51.863 "hdgst": ${hdgst:-false}, 00:25:51.863 "ddgst": ${ddgst:-false} 00:25:51.863 }, 00:25:51.863 "method": "bdev_nvme_attach_controller" 00:25:51.863 } 00:25:51.863 EOF 00:25:51.863 )") 00:25:51.863 [2024-06-10 11:34:16.761876] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.761913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
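What follows is the noisiest part of the zcopy test: a second bdevperf job (5 seconds of 50/50 random read/write at 8 KiB, queue depth 128) is started, and while it is being set up and run the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1. Every attempt fails with "Requested NSID 1 already in use", which is expected; the point, as the nvmf_rpc_ns_paused callback named in the errors suggests, is that each attempt pauses and resumes the subsystem, racing namespace state changes against in-flight zero-copy I/O. A hypothetical sketch of that loop (an assumption about the shape of target/zcopy.sh, not its verbatim body):

    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!
    while kill -0 "$perfpid" 2> /dev/null; do
        # Re-adding NSID 1 always fails, but each RPC pauses and resumes the
        # subsystem underneath the running zero-copy workload.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"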
00:25:51.863 [2024-06-10 11:34:16.769866] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.769884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:25:51.863 11:34:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:51.863 "params": { 00:25:51.863 "name": "Nvme1", 00:25:51.863 "trtype": "tcp", 00:25:51.863 "traddr": "10.0.0.2", 00:25:51.863 "adrfam": "ipv4", 00:25:51.863 "trsvcid": "4420", 00:25:51.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:51.863 "hdgst": false, 00:25:51.863 "ddgst": false 00:25:51.863 }, 00:25:51.863 "method": "bdev_nvme_attach_controller" 00:25:51.863 }' 00:25:51.863 [2024-06-10 11:34:16.777886] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.777902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.785908] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.785923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.793929] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.793944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.801949] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.801964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.804696] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:25:51.863 [2024-06-10 11:34:16.804754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953561 ] 00:25:51.863 [2024-06-10 11:34:16.809973] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.809988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.817995] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.818010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.826017] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.826033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.834040] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.834055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.842062] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.842077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.850085] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.850100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.863 [2024-06-10 11:34:16.858108] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.858123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.866130] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.866145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.874152] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.874167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.882174] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.882189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.890197] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.890212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.898220] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.898236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.906243] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.906257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.914265] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.914279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.922287] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.922302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.924994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.863 [2024-06-10 11:34:16.930308] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.930324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.938330] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.938346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.946351] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.946366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.954372] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.954387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.863 [2024-06-10 11:34:16.962394] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.863 [2024-06-10 11:34:16.962410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.123 [2024-06-10 11:34:16.974433] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.123 [2024-06-10 11:34:16.974459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.123 [2024-06-10 11:34:16.982447] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.123 [2024-06-10 11:34:16.982462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.123 [2024-06-10 11:34:16.990478] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.123 [2024-06-10 11:34:16.990497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.123 [2024-06-10 11:34:16.998494] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.123 [2024-06-10 11:34:16.998508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.123 [2024-06-10 11:34:17.006516] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.006531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.009126] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.124 [2024-06-10 11:34:17.014538] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.014554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.026584] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.026606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:25:52.124 [2024-06-10 11:34:17.038617] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.038636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.050645] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.050663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.062670] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.062687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.074704] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.074720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.086734] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.086749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.098770] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.098786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.110828] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.110853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.122848] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.122867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.134885] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.134905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.146916] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.146935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.158948] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.158965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.170985] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.171000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.183022] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.183038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.195059] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.195076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.207091] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:25:52.124 [2024-06-10 11:34:17.207106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.124 [2024-06-10 11:34:17.219125] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.124 [2024-06-10 11:34:17.219140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.231163] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.231182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.243189] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.243204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.255228] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.255243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.267265] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.267280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.279301] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.279318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.291354] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.291377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 Running I/O for 5 seconds... 
00:25:52.384 [2024-06-10 11:34:17.306134] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.306158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.322716] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.322742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.341006] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.341032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.355444] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.355469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.371948] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.371972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.389350] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.389375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.404977] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.405002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.416592] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.416618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.434191] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.434216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.450230] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.450254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.467938] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.467963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.384 [2024-06-10 11:34:17.483031] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.384 [2024-06-10 11:34:17.483055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.500759] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.500784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.516542] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.516567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.527931] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 
[2024-06-10 11:34:17.527955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.545152] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.545176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.561615] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.561640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.579104] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.579134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.595776] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.595801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.611855] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.611880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.630664] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.630689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.644974] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.644999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.662559] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.662593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.677469] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.677494] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.689376] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.689400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.706645] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.706669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.721849] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.721874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.644 [2024-06-10 11:34:17.739480] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.644 [2024-06-10 11:34:17.739505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.754612] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.754636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.771153] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.771177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.787573] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.787603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.805073] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.805098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.821016] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.821040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.839111] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.839136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.854765] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.854791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.866333] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.866357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.883042] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.883072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.898170] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.898195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.914337] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.914362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.931555] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.931586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.948130] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.948155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.964244] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.964267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.982365] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.982390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.904 [2024-06-10 11:34:17.997868] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.904 [2024-06-10 11:34:17.997892] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.016173] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.016198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.031669] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.031694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.042602] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.042626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.059694] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.059719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.074197] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.074221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.090278] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.090303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.106562] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.106592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.123941] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.123964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.140315] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.140339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.158735] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.158759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.173007] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.173031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.189509] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.189539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.206740] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.206764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.223268] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.223292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.239489] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.239514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.164 [2024-06-10 11:34:18.258125] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.164 [2024-06-10 11:34:18.258150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.272538] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.272563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.284024] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.284049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.301770] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.301794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.316178] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.316203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.332290] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.332315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.349799] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.349823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.365614] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.365638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.381788] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.381813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.393744] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.393768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.411082] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.411107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.426672] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.426696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.445254] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.445280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.458676] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.458700] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.474883] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.474907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.492042] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.492071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.508213] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.508238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.424 [2024-06-10 11:34:18.519823] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.424 [2024-06-10 11:34:18.519848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.536492] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.536516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.551571] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.551602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.567827] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.567852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.585848] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.585873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.599973] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.599998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.617549] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.617574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.633806] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.633830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.651953] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.651978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.666483] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.666508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.677611] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.677636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.695292] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.695317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.709591] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.709616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.726393] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.726418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.742592] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.742617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.754601] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.754626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.772035] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.772060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.685 [2024-06-10 11:34:18.787241] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.685 [2024-06-10 11:34:18.787271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.804707] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.804732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.818790] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.818815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.835078] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.835103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.851316] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.851341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.867688] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.867714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.884859] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.884888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.903147] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.903173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.917743] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.917767] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.933442] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.933466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.950101] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.950127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.968231] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.968257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:18.982646] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:18.982671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:19.000376] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:19.000401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:19.014747] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:19.014772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:19.030764] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:19.030789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:53.945 [2024-06-10 11:34:19.048673] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:53.945 [2024-06-10 11:34:19.048698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.064083] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.064108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.075374] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.075399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.092387] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.092412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.108000] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.108025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.117928] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.117953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.133240] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.133265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.148970] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.148995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.160352] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.160376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.177453] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.177478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.193946] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.193970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.210616] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.210640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.227912] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.227936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.244342] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.244366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.261219] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.261245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.277890] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.277913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.205 [2024-06-10 11:34:19.294939] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.205 [2024-06-10 11:34:19.294963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.311256] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.311280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.328460] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.328484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.343389] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.343413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.360144] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.360169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.376657] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.376680] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.394747] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.394772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.408943] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.408967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.425315] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.425339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.441862] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.441887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.458096] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.458120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.475736] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.475759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.491615] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.491638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.509332] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.509357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.525313] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.525338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.542593] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.542617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.465 [2024-06-10 11:34:19.559995] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.465 [2024-06-10 11:34:19.560019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.575581] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.575606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.587074] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.587098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.603537] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.603561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.620074] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.620098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.636739] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.636763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.654009] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.654033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.670508] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.670532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.687567] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.687599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.704162] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.704185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.720793] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.720816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.737427] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.737452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.754819] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.754844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.771552] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.771584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.787789] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.787813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.804638] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.804663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.725 [2024-06-10 11:34:19.822135] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.725 [2024-06-10 11:34:19.822159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.838590] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.838614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.857151] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.857176] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.872148] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.872172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.888640] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.888665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.907409] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.907434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.921768] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.921792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.938068] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.938092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.955145] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.955169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.971590] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.971615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:19.989340] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:19.989364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:20.006167] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:20.006198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:20.021137] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:20.021164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:20.037951] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:20.037976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:20.054063] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:20.054088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:20.070376] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:20.070401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:54.985 [2024-06-10 11:34:20.079997] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:54.985 [2024-06-10 11:34:20.080022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.244 [2024-06-10 11:34:20.094183] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.244 [2024-06-10 11:34:20.094208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.244 [2024-06-10 11:34:20.110264] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.244 [2024-06-10 11:34:20.110289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.244 [2024-06-10 11:34:20.127494] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.244 [2024-06-10 11:34:20.127519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.244 [2024-06-10 11:34:20.144080] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.244 [2024-06-10 11:34:20.144104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.244 [2024-06-10 11:34:20.160377] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.244 [2024-06-10 11:34:20.160401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.244 [2024-06-10 11:34:20.178538] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.244 [2024-06-10 11:34:20.178562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.244 [2024-06-10 11:34:20.192864] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.244 [2024-06-10 11:34:20.192889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.244 [2024-06-10 11:34:20.204370] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.204394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.245 [2024-06-10 11:34:20.221634] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.221659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.245 [2024-06-10 11:34:20.237174] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.237198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.245 [2024-06-10 11:34:20.249008] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.249032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.245 [2024-06-10 11:34:20.265137] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.265162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.245 [2024-06-10 11:34:20.281893] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.281918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.245 [2024-06-10 11:34:20.298259] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.298289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.245 [2024-06-10 11:34:20.314666] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.314691] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.245 [2024-06-10 11:34:20.331239] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.331264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.245 [2024-06-10 11:34:20.347594] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.245 [2024-06-10 11:34:20.347617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.364800] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.364825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.380905] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.380929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.397601] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.397626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.414653] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.414678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.431012] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.431037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.448277] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.448302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.464655] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.464680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.481756] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.481781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.497561] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.497592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.515008] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.515033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.531697] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.531722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.547947] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.547971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.565296] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.565321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.580977] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.581002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.590488] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.590513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.513 [2024-06-10 11:34:20.604557] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.513 [2024-06-10 11:34:20.604595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.621345] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.621370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.637847] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.637871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.654227] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.654252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.671623] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.671649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.689383] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.689408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.706328] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.706352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.722893] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.722917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.739193] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.739218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.755554] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.755585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.773164] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.773188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.789166] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.789191] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.807005] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.807030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.822520] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.822545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.833924] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.833948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.850725] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.850750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.772 [2024-06-10 11:34:20.866112] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.772 [2024-06-10 11:34:20.866136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:20.883611] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:20.883636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:20.899894] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:20.899918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:20.916464] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:20.916495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:20.933015] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:20.933039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:20.949286] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:20.949310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:20.967451] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:20.967475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:20.983187] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:20.983211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:21.000794] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:21.000818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:21.015138] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:21.015164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:21.032226] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:21.032252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:21.048978] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:21.049002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:21.065455] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:21.065479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:21.081680] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:21.081704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:21.099209] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:21.099234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:21.115091] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:21.115115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.031 [2024-06-10 11:34:21.132799] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.031 [2024-06-10 11:34:21.132822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.289 [2024-06-10 11:34:21.148020] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.289 [2024-06-10 11:34:21.148045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.289 [2024-06-10 11:34:21.165584] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.289 [2024-06-10 11:34:21.165607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.289 [2024-06-10 11:34:21.180416] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.289 [2024-06-10 11:34:21.180440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.289 [2024-06-10 11:34:21.197146] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.289 [2024-06-10 11:34:21.197171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.289 [2024-06-10 11:34:21.213745] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.213769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.229977] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.230006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.246356] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.246380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.264742] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.264768] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.279038] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.279063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.296734] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.296759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.311039] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.311063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.326897] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.326922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.344384] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.344408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.360267] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.360291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.377477] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.377501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.290 [2024-06-10 11:34:21.393630] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.290 [2024-06-10 11:34:21.393654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.404996] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.405021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.423077] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.423102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.437277] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.437301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.454014] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.454039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.470452] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.470475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.487526] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.487551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.503952] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.503975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.520320] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.520344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.537218] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.537242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.553961] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.553985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.571475] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.571499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.587522] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.587546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.606519] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.606543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.621677] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.621700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.549 [2024-06-10 11:34:21.640087] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.549 [2024-06-10 11:34:21.640112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.654597] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.654621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.664013] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.664037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.680784] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.680808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.698796] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.698820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.714827] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.714850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.732449] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.732474] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.749073] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.749096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.765346] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.765370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.781695] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.781719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.799310] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.799334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.814461] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.814485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.832029] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.832053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.848252] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.848275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.865681] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.865706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.880842] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.880867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.898539] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.898565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.808 [2024-06-10 11:34:21.912497] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.808 [2024-06-10 11:34:21.912523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:21.928813] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:21.928838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:21.946477] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:21.946504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:21.961671] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:21.961696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:21.979004] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:21.979029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:21.994921] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:21.994947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.012419] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.012444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.029240] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.029265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.044912] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.044937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.056607] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.056632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.072838] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.072862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.089964] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.089992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.106918] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.106942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.123269] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.123293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.139847] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.139870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.155728] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.155753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.067 [2024-06-10 11:34:22.167303] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.067 [2024-06-10 11:34:22.167328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.326 [2024-06-10 11:34:22.183264] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.183289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.199947] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.199971] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.216172] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.216196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.232852] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.232876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.249980] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.250004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.266157] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.266181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.283025] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.283049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.299295] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.299319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.315372] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.315397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 00:25:57.327 Latency(us) 00:25:57.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.327 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:25:57.327 Nvme1n1 : 5.01 12508.19 97.72 0.00 0.00 10221.42 4508.88 16882.07 00:25:57.327 =================================================================================================================== 00:25:57.327 Total : 12508.19 97.72 0.00 0.00 10221.42 4508.88 16882.07 00:25:57.327 [2024-06-10 11:34:22.327219] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.327242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.339250] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.339270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.351289] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.351313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.363318] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.363339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.375349] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.375375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.387397] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.387415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.399430] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.399447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.411461] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.411482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.327 [2024-06-10 11:34:22.423495] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.327 [2024-06-10 11:34:22.423513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.586 [2024-06-10 11:34:22.435526] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.586 [2024-06-10 11:34:22.435542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.586 [2024-06-10 11:34:22.447558] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.586 [2024-06-10 11:34:22.447573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.586 [2024-06-10 11:34:22.459599] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.586 [2024-06-10 11:34:22.459617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.586 [2024-06-10 11:34:22.471628] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.586 [2024-06-10 11:34:22.471643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.586 [2024-06-10 11:34:22.483658] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.586 [2024-06-10 11:34:22.483675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.586 [2024-06-10 11:34:22.495690] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.586 [2024-06-10 11:34:22.495706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.586 [2024-06-10 11:34:22.507723] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.586 [2024-06-10 11:34:22.507738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.586 [2024-06-10 11:34:22.519757] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:57.586 [2024-06-10 11:34:22.519771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:57.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3953561) - No such process 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3953561 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:57.586 delay0 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.586 11:34:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:25:57.586 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.586 [2024-06-10 11:34:22.671637] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:04.153 Initializing NVMe Controllers 00:26:04.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:04.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:04.153 Initialization complete. Launching workers. 00:26:04.153 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 249 00:26:04.153 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 536, failed to submit 33 00:26:04.153 success 328, unsuccess 208, failed 0 00:26:04.153 11:34:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:26:04.153 11:34:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:26:04.153 11:34:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:04.153 11:34:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:26:04.153 11:34:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:04.153 11:34:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:26:04.153 11:34:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:04.153 11:34:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:04.153 rmmod nvme_tcp 00:26:04.153 rmmod nvme_fabrics 00:26:04.153 rmmod nvme_keyring 00:26:04.153 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:04.153 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:26:04.153 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:26:04.153 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3951635 ']' 00:26:04.153 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3951635 00:26:04.153 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 3951635 ']' 00:26:04.154 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 3951635 00:26:04.154 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:26:04.154 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:04.154 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps 
--no-headers -o comm= 3951635 00:26:04.154 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:04.154 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:04.154 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3951635' 00:26:04.154 killing process with pid 3951635 00:26:04.154 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 3951635 00:26:04.154 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 3951635 00:26:04.413 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:04.413 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:04.413 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:04.413 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:04.413 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:04.413 11:34:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.413 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.413 11:34:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.318 11:34:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:06.318 00:26:06.318 real 0m35.106s 00:26:06.318 user 0m43.808s 00:26:06.318 sys 0m14.098s 00:26:06.318 11:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:06.318 11:34:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:06.318 ************************************ 00:26:06.318 END TEST nvmf_zcopy 00:26:06.318 ************************************ 00:26:06.318 11:34:31 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:26:06.318 11:34:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:06.318 11:34:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:06.318 11:34:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:06.577 ************************************ 00:26:06.577 START TEST nvmf_nmic 00:26:06.577 ************************************ 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:26:06.577 * Looking for test storage... 
00:26:06.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.577 11:34:31 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:06.577 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:26:06.578 11:34:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.560 
11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.560 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:16.561 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.561 11:34:39 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:16.561 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:16.561 Found net devices under 0000:af:00.0: cvl_0_0 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:16.561 Found net devices under 0000:af:00.1: cvl_0_1 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.561 11:34:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:16.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:26:16.561 00:26:16.561 --- 10.0.0.2 ping statistics --- 00:26:16.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.561 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:16.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:26:16.561 00:26:16.561 --- 10.0.0.1 ping statistics --- 00:26:16.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.561 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3960108 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3960108 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 3960108 ']' 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:16.561 11:34:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.561 [2024-06-10 11:34:40.261905] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:26:16.561 [2024-06-10 11:34:40.261965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.561 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.561 [2024-06-10 11:34:40.389979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:16.561 [2024-06-10 11:34:40.478861] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.561 [2024-06-10 11:34:40.478909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:16.561 [2024-06-10 11:34:40.478923] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.561 [2024-06-10 11:34:40.478935] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.561 [2024-06-10 11:34:40.478945] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.561 [2024-06-10 11:34:40.479053] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.561 [2024-06-10 11:34:40.479147] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.561 [2024-06-10 11:34:40.479265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.561 [2024-06-10 11:34:40.479265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:16.561 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:16.561 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:26:16.561 11:34:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:16.561 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 [2024-06-10 11:34:41.228855] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 Malloc0 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 [2024-06-10 11:34:41.284858] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:26:16.562 test case1: single bdev can't be used in multiple subsystems 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 [2024-06-10 11:34:41.308712] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:26:16.562 [2024-06-10 11:34:41.308738] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:26:16.562 [2024-06-10 11:34:41.308751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:16.562 request: 00:26:16.562 { 00:26:16.562 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:26:16.562 "namespace": { 00:26:16.562 "bdev_name": "Malloc0", 00:26:16.562 "no_auto_visible": false 00:26:16.562 }, 00:26:16.562 "method": "nvmf_subsystem_add_ns", 00:26:16.562 "req_id": 1 00:26:16.562 } 00:26:16.562 Got JSON-RPC error response 00:26:16.562 response: 00:26:16.562 { 00:26:16.562 "code": -32602, 00:26:16.562 "message": "Invalid parameters" 00:26:16.562 } 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:26:16.562 Adding namespace failed - expected result. 
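(Editor's note) Test case1 above exercises SPDK's exclusive-write bdev claim: once Malloc0 is attached to nqn.2016-06.io.spdk:cnode1, the attempt to add it to cnode2 is expected to fail with the JSON-RPC error shown. A minimal sketch of the same check, assuming the standard scripts/rpc.py client against an already running nvmf target on its default RPC socket (the nqn and serial values simply mirror the log above):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    if scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: Malloc0 was claimed by a second subsystem" >&2
        exit 1                                                                # the test expects this branch not to be taken
    fi

nmic.sh records the same outcome through its nmic_status variable instead of exiting, which is why the trace above shows nmic_status flipping from 0 to 1 before the '[' 1 -eq 0 ']' check and then prints "Adding namespace failed - expected result."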
00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:26:16.562 test case2: host connect to nvmf target in multiple paths 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:16.562 [2024-06-10 11:34:41.324873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.562 11:34:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:17.506 11:34:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:26:18.885 11:34:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:26:18.885 11:34:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:26:18.885 11:34:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.885 11:34:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:26:18.885 11:34:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:26:21.420 11:34:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:21.420 11:34:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:21.420 11:34:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:26:21.420 11:34:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:26:21.420 11:34:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.420 11:34:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:26:21.420 11:34:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:26:21.420 [global] 00:26:21.420 thread=1 00:26:21.420 invalidate=1 00:26:21.420 rw=write 00:26:21.420 time_based=1 00:26:21.420 runtime=1 00:26:21.420 ioengine=libaio 00:26:21.420 direct=1 00:26:21.420 bs=4096 00:26:21.420 iodepth=1 00:26:21.420 norandommap=0 00:26:21.420 numjobs=1 00:26:21.420 00:26:21.420 verify_dump=1 00:26:21.420 verify_backlog=512 00:26:21.420 verify_state_save=0 00:26:21.420 do_verify=1 00:26:21.420 verify=crc32c-intel 00:26:21.420 [job0] 00:26:21.420 filename=/dev/nvme0n1 00:26:21.420 Could not set queue depth (nvme0n1) 00:26:21.420 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:21.420 fio-3.35 00:26:21.420 Starting 1 thread 00:26:22.798 00:26:22.798 job0: (groupid=0, jobs=1): err= 0: pid=3961287: Mon Jun 10 11:34:47 2024 00:26:22.798 read: IOPS=20, BW=81.7KiB/s (83.7kB/s)(84.0KiB/1028msec) 00:26:22.798 slat (nsec): min=11495, max=28060, avg=25853.29, stdev=3342.28 
00:26:22.798 clat (usec): min=40922, max=42302, avg=41168.33, stdev=423.74 00:26:22.798 lat (usec): min=40948, max=42313, avg=41194.19, stdev=421.78 00:26:22.798 clat percentiles (usec): 00:26:22.798 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:22.798 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:22.798 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:26:22.798 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:26:22.798 | 99.99th=[42206] 00:26:22.798 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:26:22.798 slat (usec): min=12, max=26730, avg=65.79, stdev=1180.72 00:26:22.798 clat (usec): min=228, max=448, avg=248.57, stdev=11.81 00:26:22.798 lat (usec): min=241, max=27099, avg=314.36, stdev=1186.11 00:26:22.798 clat percentiles (usec): 00:26:22.798 | 1.00th=[ 235], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 245], 00:26:22.798 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 247], 60.00th=[ 249], 00:26:22.798 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 258], 00:26:22.798 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 449], 99.95th=[ 449], 00:26:22.798 | 99.99th=[ 449] 00:26:22.798 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:26:22.798 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:22.798 lat (usec) : 250=65.85%, 500=30.21% 00:26:22.798 lat (msec) : 50=3.94% 00:26:22.798 cpu : usr=0.39%, sys=1.17%, ctx=537, majf=0, minf=2 00:26:22.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:22.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:22.798 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:22.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:22.798 00:26:22.798 Run status group 0 (all jobs): 00:26:22.798 READ: bw=81.7KiB/s (83.7kB/s), 81.7KiB/s-81.7KiB/s (83.7kB/s-83.7kB/s), io=84.0KiB (86.0kB), run=1028-1028msec 00:26:22.798 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:26:22.798 00:26:22.798 Disk stats (read/write): 00:26:22.798 nvme0n1: ios=43/512, merge=0/0, ticks=1688/125, in_queue=1813, util=98.70% 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:22.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:22.798 rmmod nvme_tcp 00:26:22.798 rmmod nvme_fabrics 00:26:22.798 rmmod nvme_keyring 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3960108 ']' 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3960108 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 3960108 ']' 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 3960108 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3960108 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3960108' 00:26:22.798 killing process with pid 3960108 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 3960108 00:26:22.798 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 3960108 00:26:23.057 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.057 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.057 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.057 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.057 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.057 11:34:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.057 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.057 11:34:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.060 11:34:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:25.060 00:26:25.060 real 0m18.622s 00:26:25.060 user 0m42.630s 00:26:25.060 sys 0m7.700s 00:26:25.060 11:34:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:25.060 11:34:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:25.060 ************************************ 00:26:25.060 END TEST nvmf_nmic 00:26:25.060 ************************************ 00:26:25.060 11:34:50 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:26:25.060 11:34:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:25.060 11:34:50 nvmf_tcp -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:26:25.060 11:34:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.319 ************************************ 00:26:25.319 START TEST nvmf_fio_target 00:26:25.319 ************************************ 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:26:25.319 * Looking for test storage... 00:26:25.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.319 11:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.300 11:34:58 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:35.300 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:35.300 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.300 11:34:58 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:35.300 Found net devices under 0000:af:00.0: cvl_0_0 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:35.300 Found net devices under 0000:af:00.1: cvl_0_1 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:35.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:26:35.300 00:26:35.300 --- 10.0.0.2 ping statistics --- 00:26:35.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.300 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:35.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:26:35.300 00:26:35.300 --- 10.0.0.1 ping statistics --- 00:26:35.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.300 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:35.300 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.301 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:35.301 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:35.301 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.301 11:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3965942 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3965942 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 3965942 ']' 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:35.301 11:34:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 [2024-06-10 11:34:59.098085] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:26:35.301 [2024-06-10 11:34:59.098145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.301 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.301 [2024-06-10 11:34:59.226376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.301 [2024-06-10 11:34:59.313313] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.301 [2024-06-10 11:34:59.313357] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.301 [2024-06-10 11:34:59.313371] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.301 [2024-06-10 11:34:59.313384] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.301 [2024-06-10 11:34:59.313394] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.301 [2024-06-10 11:34:59.313470] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.301 [2024-06-10 11:34:59.313562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.301 [2024-06-10 11:34:59.313676] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.301 [2024-06-10 11:34:59.313677] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.301 11:35:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:35.301 11:35:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:26:35.301 11:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:35.301 11:35:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:35.301 11:35:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 11:35:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.301 11:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:35.301 [2024-06-10 11:35:00.260566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.301 11:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:35.559 11:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:26:35.559 11:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:35.818 11:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:26:35.818 11:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:36.076 11:35:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:26:36.076 11:35:00 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:36.335 11:35:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:26:36.335 11:35:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:26:36.593 11:35:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:36.851 11:35:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:26:36.851 11:35:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:37.112 11:35:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:26:37.112 11:35:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:37.369 11:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:26:37.369 11:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:26:37.369 11:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:37.627 11:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:26:37.627 11:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:37.886 11:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:26:37.886 11:35:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:38.145 11:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.403 [2024-06-10 11:35:03.318247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.403 11:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:26:38.661 11:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:26:38.920 11:35:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:40.298 11:35:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:26:40.298 11:35:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:26:40.298 11:35:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:26:40.298 11:35:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:26:40.298 11:35:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:26:40.298 11:35:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:26:42.205 11:35:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:42.205 11:35:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:42.205 11:35:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:26:42.205 11:35:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:26:42.205 11:35:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:42.205 11:35:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:26:42.205 11:35:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:26:42.205 [global] 00:26:42.205 thread=1 00:26:42.205 invalidate=1 00:26:42.205 rw=write 00:26:42.205 time_based=1 00:26:42.205 runtime=1 00:26:42.205 ioengine=libaio 00:26:42.205 direct=1 00:26:42.205 bs=4096 00:26:42.205 iodepth=1 00:26:42.205 norandommap=0 00:26:42.205 numjobs=1 00:26:42.205 00:26:42.205 verify_dump=1 00:26:42.205 verify_backlog=512 00:26:42.205 verify_state_save=0 00:26:42.205 do_verify=1 00:26:42.205 verify=crc32c-intel 00:26:42.205 [job0] 00:26:42.205 filename=/dev/nvme0n1 00:26:42.205 [job1] 00:26:42.205 filename=/dev/nvme0n2 00:26:42.205 [job2] 00:26:42.205 filename=/dev/nvme0n3 00:26:42.205 [job3] 00:26:42.205 filename=/dev/nvme0n4 00:26:42.205 Could not set queue depth (nvme0n1) 00:26:42.205 Could not set queue depth (nvme0n2) 00:26:42.205 Could not set queue depth (nvme0n3) 00:26:42.205 Could not set queue depth (nvme0n4) 00:26:42.463 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:42.463 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:42.463 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:42.463 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:42.463 fio-3.35 00:26:42.463 Starting 4 threads 00:26:43.843 00:26:43.843 job0: (groupid=0, jobs=1): err= 0: pid=3967611: Mon Jun 10 11:35:08 2024 00:26:43.843 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:26:43.843 slat (nsec): min=10870, max=26584, avg=23566.59, stdev=4078.15 00:26:43.843 clat (usec): min=680, max=42236, avg=39418.60, stdev=8666.18 00:26:43.843 lat (usec): min=707, max=42246, avg=39442.17, stdev=8665.46 00:26:43.843 clat percentiles (usec): 00:26:43.843 | 1.00th=[ 685], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:26:43.843 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:43.843 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:43.843 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:26:43.843 | 99.99th=[42206] 00:26:43.843 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:26:43.843 slat (usec): min=11, max=16642, avg=45.44, stdev=734.94 00:26:43.843 clat (usec): min=223, max=497, 
avg=258.96, stdev=21.22 00:26:43.843 lat (usec): min=236, max=17049, avg=304.40, stdev=741.78 00:26:43.843 clat percentiles (usec): 00:26:43.843 | 1.00th=[ 231], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 247], 00:26:43.843 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:26:43.843 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:26:43.843 | 99.00th=[ 310], 99.50th=[ 408], 99.90th=[ 498], 99.95th=[ 498], 00:26:43.843 | 99.99th=[ 498] 00:26:43.843 bw ( KiB/s): min= 4096, max= 4096, per=25.97%, avg=4096.00, stdev= 0.00, samples=1 00:26:43.843 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:43.843 lat (usec) : 250=30.34%, 500=65.54%, 750=0.19% 00:26:43.843 lat (msec) : 50=3.93% 00:26:43.843 cpu : usr=0.39%, sys=0.68%, ctx=536, majf=0, minf=2 00:26:43.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.843 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:43.843 job1: (groupid=0, jobs=1): err= 0: pid=3967612: Mon Jun 10 11:35:08 2024 00:26:43.843 read: IOPS=1100, BW=4401KiB/s (4507kB/s)(4564KiB/1037msec) 00:26:43.843 slat (nsec): min=3860, max=46120, avg=7787.31, stdev=3048.94 00:26:43.843 clat (usec): min=280, max=41055, avg=546.07, stdev=2073.60 00:26:43.843 lat (usec): min=285, max=41081, avg=553.85, stdev=2074.31 00:26:43.843 clat percentiles (usec): 00:26:43.843 | 1.00th=[ 297], 5.00th=[ 343], 10.00th=[ 359], 20.00th=[ 379], 00:26:43.843 | 30.00th=[ 408], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 461], 00:26:43.843 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 529], 00:26:43.843 | 99.00th=[ 652], 99.50th=[ 758], 99.90th=[41157], 99.95th=[41157], 00:26:43.843 | 99.99th=[41157] 00:26:43.843 write: IOPS=1481, BW=5925KiB/s (6067kB/s)(6144KiB/1037msec); 0 zone resets 00:26:43.843 slat (usec): min=5, max=16527, avg=23.64, stdev=421.40 00:26:43.843 clat (usec): min=140, max=1053, avg=235.56, stdev=49.11 00:26:43.843 lat (usec): min=145, max=17170, avg=259.21, stdev=434.65 00:26:43.843 clat percentiles (usec): 00:26:43.843 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 194], 20.00th=[ 202], 00:26:43.843 | 30.00th=[ 212], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 00:26:43.843 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 322], 00:26:43.843 | 99.00th=[ 367], 99.50th=[ 412], 99.90th=[ 644], 99.95th=[ 1057], 00:26:43.843 | 99.99th=[ 1057] 00:26:43.843 bw ( KiB/s): min= 4096, max= 8192, per=38.95%, avg=6144.00, stdev=2896.31, samples=2 00:26:43.843 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:26:43.843 lat (usec) : 250=39.52%, 500=54.73%, 750=5.49%, 1000=0.11% 00:26:43.843 lat (msec) : 2=0.04%, 50=0.11% 00:26:43.843 cpu : usr=2.12%, sys=4.05%, ctx=2679, majf=0, minf=1 00:26:43.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.843 issued rwts: total=1141,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:43.843 job2: (groupid=0, jobs=1): err= 0: pid=3967613: Mon Jun 10 11:35:08 2024 00:26:43.843 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:26:43.843 slat (nsec): min=8939, max=37390, avg=10833.20, stdev=3896.82 00:26:43.843 clat (usec): min=372, max=41963, avg=1349.57, stdev=5909.54 00:26:43.843 lat (usec): min=382, max=41988, avg=1360.40, stdev=5911.37 00:26:43.843 clat percentiles (usec): 00:26:43.843 | 1.00th=[ 379], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 420], 00:26:43.843 | 30.00th=[ 437], 40.00th=[ 453], 50.00th=[ 482], 60.00th=[ 494], 00:26:43.843 | 70.00th=[ 506], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 562], 00:26:43.843 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:26:43.843 | 99.99th=[42206] 00:26:43.843 write: IOPS=1015, BW=4064KiB/s (4161kB/s)(4068KiB/1001msec); 0 zone resets 00:26:43.843 slat (usec): min=12, max=16567, avg=29.79, stdev=519.09 00:26:43.843 clat (usec): min=205, max=935, avg=265.00, stdev=33.74 00:26:43.843 lat (usec): min=217, max=17015, avg=294.79, stdev=525.92 00:26:43.843 clat percentiles (usec): 00:26:43.843 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 245], 00:26:43.843 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:26:43.843 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:26:43.843 | 99.00th=[ 355], 99.50th=[ 396], 99.90th=[ 474], 99.95th=[ 938], 00:26:43.843 | 99.99th=[ 938] 00:26:43.843 bw ( KiB/s): min= 4096, max= 4096, per=25.97%, avg=4096.00, stdev= 0.00, samples=1 00:26:43.843 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:43.843 lat (usec) : 250=18.31%, 500=69.91%, 750=10.92%, 1000=0.07% 00:26:43.843 lat (msec) : 4=0.07%, 50=0.72% 00:26:43.843 cpu : usr=1.10%, sys=2.10%, ctx=1531, majf=0, minf=1 00:26:43.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.843 issued rwts: total=512,1017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:43.843 job3: (groupid=0, jobs=1): err= 0: pid=3967614: Mon Jun 10 11:35:08 2024 00:26:43.843 read: IOPS=507, BW=2030KiB/s (2078kB/s)(2052KiB/1011msec) 00:26:43.843 slat (nsec): min=9159, max=42479, avg=10434.36, stdev=3169.65 00:26:43.843 clat (usec): min=330, max=41946, avg=1382.53, stdev=5958.42 00:26:43.843 lat (usec): min=339, max=41972, avg=1392.96, stdev=5960.56 00:26:43.843 clat percentiles (usec): 00:26:43.843 | 1.00th=[ 351], 5.00th=[ 375], 10.00th=[ 392], 20.00th=[ 408], 00:26:43.843 | 30.00th=[ 416], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 437], 00:26:43.843 | 70.00th=[ 453], 80.00th=[ 506], 90.00th=[ 586], 95.00th=[ 709], 00:26:43.843 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:26:43.843 | 99.99th=[42206] 00:26:43.843 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:26:43.843 slat (nsec): min=11438, max=40366, avg=14572.14, stdev=2690.96 00:26:43.843 clat (usec): min=181, max=1100, avg=270.43, stdev=53.00 00:26:43.843 lat (usec): min=194, max=1115, avg=285.00, stdev=53.41 00:26:43.843 clat percentiles (usec): 00:26:43.843 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 245], 00:26:43.843 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:26:43.843 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 330], 00:26:43.843 | 99.00th=[ 441], 99.50th=[ 594], 99.90th=[ 865], 99.95th=[ 1106], 00:26:43.843 | 99.99th=[ 1106] 00:26:43.843 bw ( KiB/s): min= 
4096, max= 4096, per=25.97%, avg=4096.00, stdev= 0.00, samples=2 00:26:43.843 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:26:43.843 lat (usec) : 250=18.41%, 500=74.24%, 750=5.53%, 1000=0.85% 00:26:43.843 lat (msec) : 2=0.13%, 20=0.13%, 50=0.72% 00:26:43.843 cpu : usr=1.88%, sys=2.18%, ctx=1539, majf=0, minf=1 00:26:43.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.843 issued rwts: total=513,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:43.844 00:26:43.844 Run status group 0 (all jobs): 00:26:43.844 READ: bw=8440KiB/s (8642kB/s), 85.9KiB/s-4401KiB/s (87.9kB/s-4507kB/s), io=8752KiB (8962kB), run=1001-1037msec 00:26:43.844 WRITE: bw=15.4MiB/s (16.1MB/s), 1998KiB/s-5925KiB/s (2046kB/s-6067kB/s), io=16.0MiB (16.7MB), run=1001-1037msec 00:26:43.844 00:26:43.844 Disk stats (read/write): 00:26:43.844 nvme0n1: ios=38/512, merge=0/0, ticks=1445/129, in_queue=1574, util=86.47% 00:26:43.844 nvme0n2: ios=1076/1334, merge=0/0, ticks=645/299, in_queue=944, util=90.50% 00:26:43.844 nvme0n3: ios=406/512, merge=0/0, ticks=1489/141, in_queue=1630, util=94.63% 00:26:43.844 nvme0n4: ios=512/512, merge=0/0, ticks=1266/139, in_queue=1405, util=96.87% 00:26:43.844 11:35:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:26:43.844 [global] 00:26:43.844 thread=1 00:26:43.844 invalidate=1 00:26:43.844 rw=randwrite 00:26:43.844 time_based=1 00:26:43.844 runtime=1 00:26:43.844 ioengine=libaio 00:26:43.844 direct=1 00:26:43.844 bs=4096 00:26:43.844 iodepth=1 00:26:43.844 norandommap=0 00:26:43.844 numjobs=1 00:26:43.844 00:26:43.844 verify_dump=1 00:26:43.844 verify_backlog=512 00:26:43.844 verify_state_save=0 00:26:43.844 do_verify=1 00:26:43.844 verify=crc32c-intel 00:26:43.844 [job0] 00:26:43.844 filename=/dev/nvme0n1 00:26:43.844 [job1] 00:26:43.844 filename=/dev/nvme0n2 00:26:43.844 [job2] 00:26:43.844 filename=/dev/nvme0n3 00:26:43.844 [job3] 00:26:43.844 filename=/dev/nvme0n4 00:26:44.128 Could not set queue depth (nvme0n1) 00:26:44.128 Could not set queue depth (nvme0n2) 00:26:44.128 Could not set queue depth (nvme0n3) 00:26:44.128 Could not set queue depth (nvme0n4) 00:26:44.391 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:44.391 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:44.391 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:44.391 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:44.391 fio-3.35 00:26:44.391 Starting 4 threads 00:26:45.768 00:26:45.768 job0: (groupid=0, jobs=1): err= 0: pid=3968038: Mon Jun 10 11:35:10 2024 00:26:45.768 read: IOPS=756, BW=3025KiB/s (3098kB/s)(3028KiB/1001msec) 00:26:45.768 slat (nsec): min=8576, max=49577, avg=9703.64, stdev=2635.06 00:26:45.768 clat (usec): min=329, max=41978, avg=944.32, stdev=4412.53 00:26:45.768 lat (usec): min=338, max=42002, avg=954.02, stdev=4413.90 00:26:45.768 clat percentiles (usec): 00:26:45.768 | 1.00th=[ 347], 5.00th=[ 371], 10.00th=[ 396], 20.00th=[ 437], 
00:26:45.768 | 30.00th=[ 457], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 474], 00:26:45.768 | 70.00th=[ 478], 80.00th=[ 482], 90.00th=[ 490], 95.00th=[ 545], 00:26:45.768 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:26:45.768 | 99.99th=[42206] 00:26:45.768 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:26:45.768 slat (nsec): min=11171, max=40976, avg=12825.26, stdev=2046.44 00:26:45.768 clat (usec): min=205, max=392, avg=253.76, stdev=27.00 00:26:45.768 lat (usec): min=217, max=433, avg=266.58, stdev=27.36 00:26:45.768 clat percentiles (usec): 00:26:45.768 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:26:45.768 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 269], 60.00th=[ 273], 00:26:45.768 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 281], 95.00th=[ 285], 00:26:45.768 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 322], 99.95th=[ 392], 00:26:45.768 | 99.99th=[ 392] 00:26:45.768 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:26:45.768 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:45.768 lat (usec) : 250=27.85%, 500=69.06%, 750=2.53% 00:26:45.768 lat (msec) : 2=0.06%, 50=0.51% 00:26:45.768 cpu : usr=1.60%, sys=2.70%, ctx=1781, majf=0, minf=2 00:26:45.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.768 issued rwts: total=757,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:45.768 job1: (groupid=0, jobs=1): err= 0: pid=3968039: Mon Jun 10 11:35:10 2024 00:26:45.768 read: IOPS=21, BW=84.9KiB/s (86.9kB/s)(88.0KiB/1037msec) 00:26:45.768 slat (nsec): min=11500, max=31605, avg=24173.18, stdev=3811.77 00:26:45.768 clat (usec): min=40841, max=41963, avg=41200.99, stdev=406.54 00:26:45.768 lat (usec): min=40867, max=41994, avg=41225.17, stdev=407.18 00:26:45.769 clat percentiles (usec): 00:26:45.769 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:45.769 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:45.769 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:26:45.769 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:26:45.769 | 99.99th=[42206] 00:26:45.769 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:26:45.769 slat (nsec): min=11744, max=39596, avg=12853.52, stdev=1870.62 00:26:45.769 clat (usec): min=206, max=472, avg=236.86, stdev=16.24 00:26:45.769 lat (usec): min=218, max=512, avg=249.72, stdev=17.23 00:26:45.769 clat percentiles (usec): 00:26:45.769 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:26:45.769 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 239], 00:26:45.769 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 260], 00:26:45.769 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 474], 99.95th=[ 474], 00:26:45.769 | 99.99th=[ 474] 00:26:45.769 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:26:45.769 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:45.769 lat (usec) : 250=81.84%, 500=14.04% 00:26:45.769 lat (msec) : 50=4.12% 00:26:45.769 cpu : usr=0.68%, sys=0.77%, ctx=534, majf=0, minf=1 00:26:45.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:26:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.769 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:45.769 job2: (groupid=0, jobs=1): err= 0: pid=3968040: Mon Jun 10 11:35:10 2024 00:26:45.769 read: IOPS=20, BW=82.1KiB/s (84.1kB/s)(84.0KiB/1023msec) 00:26:45.769 slat (nsec): min=11757, max=26476, avg=24517.14, stdev=3171.89 00:26:45.769 clat (usec): min=40795, max=41931, avg=41040.48, stdev=268.46 00:26:45.769 lat (usec): min=40821, max=41957, avg=41065.00, stdev=266.88 00:26:45.769 clat percentiles (usec): 00:26:45.769 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:26:45.769 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:45.769 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:26:45.769 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:45.769 | 99.99th=[41681] 00:26:45.769 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:26:45.769 slat (nsec): min=12310, max=38233, avg=13350.01, stdev=1842.74 00:26:45.769 clat (usec): min=218, max=511, avg=296.35, stdev=41.15 00:26:45.769 lat (usec): min=231, max=549, avg=309.70, stdev=41.45 00:26:45.769 clat percentiles (usec): 00:26:45.769 | 1.00th=[ 231], 5.00th=[ 245], 10.00th=[ 258], 20.00th=[ 269], 00:26:45.769 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:26:45.769 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 379], 00:26:45.769 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 510], 99.95th=[ 510], 00:26:45.769 | 99.99th=[ 510] 00:26:45.769 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:26:45.769 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:45.769 lat (usec) : 250=6.38%, 500=89.49%, 750=0.19% 00:26:45.769 lat (msec) : 50=3.94% 00:26:45.769 cpu : usr=0.49%, sys=0.59%, ctx=534, majf=0, minf=1 00:26:45.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.769 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:45.769 job3: (groupid=0, jobs=1): err= 0: pid=3968041: Mon Jun 10 11:35:10 2024 00:26:45.769 read: IOPS=115, BW=463KiB/s (474kB/s)(472KiB/1019msec) 00:26:45.769 slat (nsec): min=9039, max=27523, avg=12320.64, stdev=5770.15 00:26:45.769 clat (usec): min=348, max=42195, avg=7309.44, stdev=15287.23 00:26:45.769 lat (usec): min=357, max=42220, avg=7321.76, stdev=15291.93 00:26:45.769 clat percentiles (usec): 00:26:45.769 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 375], 00:26:45.769 | 30.00th=[ 396], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 437], 00:26:45.769 | 70.00th=[ 502], 80.00th=[ 594], 90.00th=[41157], 95.00th=[41157], 00:26:45.769 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:26:45.769 | 99.99th=[42206] 00:26:45.769 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:26:45.769 slat (nsec): min=12541, max=41192, avg=14512.69, stdev=2315.06 00:26:45.769 clat (usec): min=229, max=403, avg=283.24, stdev=21.01 00:26:45.769 lat (usec): 
min=242, max=435, avg=297.75, stdev=21.85 00:26:45.769 clat percentiles (usec): 00:26:45.769 | 1.00th=[ 237], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 269], 00:26:45.769 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:26:45.769 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 318], 00:26:45.769 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 404], 99.95th=[ 404], 00:26:45.769 | 99.99th=[ 404] 00:26:45.769 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:26:45.769 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:26:45.769 lat (usec) : 250=3.65%, 500=90.48%, 750=2.70% 00:26:45.769 lat (msec) : 50=3.17% 00:26:45.769 cpu : usr=0.88%, sys=0.88%, ctx=631, majf=0, minf=1 00:26:45.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.769 issued rwts: total=118,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:45.769 00:26:45.769 Run status group 0 (all jobs): 00:26:45.769 READ: bw=3541KiB/s (3626kB/s), 82.1KiB/s-3025KiB/s (84.1kB/s-3098kB/s), io=3672KiB (3760kB), run=1001-1037msec 00:26:45.769 WRITE: bw=9875KiB/s (10.1MB/s), 1975KiB/s-4092KiB/s (2022kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1037msec 00:26:45.769 00:26:45.769 Disk stats (read/write): 00:26:45.769 nvme0n1: ios=562/676, merge=0/0, ticks=611/176, in_queue=787, util=81.66% 00:26:45.769 nvme0n2: ios=66/512, merge=0/0, ticks=736/115, in_queue=851, util=86.19% 00:26:45.769 nvme0n3: ios=39/512, merge=0/0, ticks=1565/148, in_queue=1713, util=97.16% 00:26:45.769 nvme0n4: ios=147/512, merge=0/0, ticks=1336/140, in_queue=1476, util=99.67% 00:26:45.769 11:35:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:26:45.769 [global] 00:26:45.769 thread=1 00:26:45.769 invalidate=1 00:26:45.769 rw=write 00:26:45.769 time_based=1 00:26:45.769 runtime=1 00:26:45.769 ioengine=libaio 00:26:45.769 direct=1 00:26:45.769 bs=4096 00:26:45.769 iodepth=128 00:26:45.769 norandommap=0 00:26:45.769 numjobs=1 00:26:45.769 00:26:45.769 verify_dump=1 00:26:45.769 verify_backlog=512 00:26:45.769 verify_state_save=0 00:26:45.769 do_verify=1 00:26:45.769 verify=crc32c-intel 00:26:45.769 [job0] 00:26:45.769 filename=/dev/nvme0n1 00:26:45.769 [job1] 00:26:45.769 filename=/dev/nvme0n2 00:26:45.769 [job2] 00:26:45.769 filename=/dev/nvme0n3 00:26:45.769 [job3] 00:26:45.769 filename=/dev/nvme0n4 00:26:45.769 Could not set queue depth (nvme0n1) 00:26:45.769 Could not set queue depth (nvme0n2) 00:26:45.769 Could not set queue depth (nvme0n3) 00:26:45.769 Could not set queue depth (nvme0n4) 00:26:46.028 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:46.028 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:46.028 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:46.028 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:46.028 fio-3.35 00:26:46.028 Starting 4 threads 00:26:47.407 00:26:47.407 job0: (groupid=0, jobs=1): err= 0: pid=3968466: Mon Jun 10 11:35:12 2024 
00:26:47.407 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:26:47.407 slat (usec): min=2, max=6144, avg=102.96, stdev=586.22 00:26:47.407 clat (usec): min=7611, max=21512, avg=13463.55, stdev=1982.18 00:26:47.407 lat (usec): min=8244, max=21522, avg=13566.51, stdev=2008.98 00:26:47.407 clat percentiles (usec): 00:26:47.407 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[10945], 20.00th=[11994], 00:26:47.408 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13304], 60.00th=[13960], 00:26:47.408 | 70.00th=[14615], 80.00th=[15139], 90.00th=[16057], 95.00th=[16450], 00:26:47.408 | 99.00th=[18482], 99.50th=[19268], 99.90th=[19530], 99.95th=[20317], 00:26:47.408 | 99.99th=[21627] 00:26:47.408 write: IOPS=4705, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1006msec); 0 zone resets 00:26:47.408 slat (usec): min=3, max=6567, avg=103.28, stdev=566.25 00:26:47.408 clat (usec): min=401, max=35557, avg=13772.82, stdev=3588.86 00:26:47.408 lat (usec): min=5752, max=35564, avg=13876.10, stdev=3594.63 00:26:47.408 clat percentiles (usec): 00:26:47.408 | 1.00th=[ 6390], 5.00th=[ 9372], 10.00th=[11338], 20.00th=[12387], 00:26:47.408 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13435], 60.00th=[14222], 00:26:47.408 | 70.00th=[14353], 80.00th=[14484], 90.00th=[15139], 95.00th=[17433], 00:26:47.408 | 99.00th=[32375], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:26:47.408 | 99.99th=[35390] 00:26:47.408 bw ( KiB/s): min=17856, max=19056, per=28.01%, avg=18456.00, stdev=848.53, samples=2 00:26:47.408 iops : min= 4464, max= 4764, avg=4614.00, stdev=212.13, samples=2 00:26:47.408 lat (usec) : 500=0.01% 00:26:47.408 lat (msec) : 10=4.97%, 20=93.37%, 50=1.65% 00:26:47.408 cpu : usr=5.57%, sys=6.47%, ctx=471, majf=0, minf=1 00:26:47.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:47.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:47.408 issued rwts: total=4608,4734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:47.408 job1: (groupid=0, jobs=1): err= 0: pid=3968468: Mon Jun 10 11:35:12 2024 00:26:47.408 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:26:47.408 slat (usec): min=2, max=13619, avg=123.37, stdev=859.54 00:26:47.408 clat (usec): min=3885, max=40778, avg=15879.11, stdev=5276.25 00:26:47.408 lat (usec): min=3896, max=40783, avg=16002.47, stdev=5332.01 00:26:47.408 clat percentiles (usec): 00:26:47.408 | 1.00th=[ 5866], 5.00th=[10683], 10.00th=[11863], 20.00th=[13435], 00:26:47.408 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14746], 00:26:47.408 | 70.00th=[15795], 80.00th=[18220], 90.00th=[22152], 95.00th=[25822], 00:26:47.408 | 99.00th=[37487], 99.50th=[38536], 99.90th=[40633], 99.95th=[40633], 00:26:47.408 | 99.99th=[40633] 00:26:47.408 write: IOPS=3887, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1010msec); 0 zone resets 00:26:47.408 slat (usec): min=4, max=10693, avg=125.17, stdev=570.59 00:26:47.408 clat (usec): min=1953, max=49455, avg=18071.46, stdev=9511.46 00:26:47.408 lat (usec): min=1970, max=49463, avg=18196.62, stdev=9575.42 00:26:47.408 clat percentiles (usec): 00:26:47.408 | 1.00th=[ 5407], 5.00th=[ 7570], 10.00th=[ 9503], 20.00th=[11469], 00:26:47.408 | 30.00th=[13304], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:26:47.408 | 70.00th=[18482], 80.00th=[26084], 90.00th=[31327], 95.00th=[38536], 00:26:47.408 | 99.00th=[48497], 99.50th=[48497], 
99.90th=[49546], 99.95th=[49546], 00:26:47.408 | 99.99th=[49546] 00:26:47.408 bw ( KiB/s): min=10760, max=19632, per=23.07%, avg=15196.00, stdev=6273.45, samples=2 00:26:47.408 iops : min= 2690, max= 4908, avg=3799.00, stdev=1568.36, samples=2 00:26:47.408 lat (msec) : 2=0.03%, 4=0.01%, 10=9.95%, 20=67.91%, 50=22.10% 00:26:47.408 cpu : usr=4.66%, sys=5.45%, ctx=502, majf=0, minf=1 00:26:47.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:47.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:47.408 issued rwts: total=3584,3926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:47.408 job2: (groupid=0, jobs=1): err= 0: pid=3968470: Mon Jun 10 11:35:12 2024 00:26:47.408 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:26:47.408 slat (usec): min=2, max=28071, avg=146.81, stdev=1056.95 00:26:47.408 clat (usec): min=6080, max=80389, avg=19541.02, stdev=13694.57 00:26:47.408 lat (usec): min=6929, max=80404, avg=19687.82, stdev=13775.49 00:26:47.408 clat percentiles (usec): 00:26:47.408 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[10814], 20.00th=[11994], 00:26:47.408 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13698], 60.00th=[17171], 00:26:47.408 | 70.00th=[20055], 80.00th=[22938], 90.00th=[31851], 95.00th=[60031], 00:26:47.408 | 99.00th=[73925], 99.50th=[73925], 99.90th=[80217], 99.95th=[80217], 00:26:47.408 | 99.99th=[80217] 00:26:47.408 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1006msec); 0 zone resets 00:26:47.408 slat (usec): min=4, max=17970, avg=107.08, stdev=637.60 00:26:47.408 clat (usec): min=1982, max=51398, avg=14521.82, stdev=7302.69 00:26:47.408 lat (usec): min=2001, max=51416, avg=14628.90, stdev=7348.09 00:26:47.408 clat percentiles (usec): 00:26:47.408 | 1.00th=[ 5473], 5.00th=[ 6783], 10.00th=[ 8356], 20.00th=[ 9503], 00:26:47.408 | 30.00th=[10290], 40.00th=[12125], 50.00th=[13042], 60.00th=[13435], 00:26:47.408 | 70.00th=[17171], 80.00th=[18220], 90.00th=[23987], 95.00th=[26084], 00:26:47.408 | 99.00th=[44827], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:26:47.408 | 99.99th=[51643] 00:26:47.408 bw ( KiB/s): min=10048, max=20480, per=23.17%, avg=15264.00, stdev=7376.54, samples=2 00:26:47.408 iops : min= 2512, max= 5120, avg=3816.00, stdev=1844.13, samples=2 00:26:47.408 lat (msec) : 2=0.07%, 4=0.17%, 10=16.59%, 20=62.75%, 50=17.27% 00:26:47.408 lat (msec) : 100=3.15% 00:26:47.408 cpu : usr=4.78%, sys=6.67%, ctx=389, majf=0, minf=1 00:26:47.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:47.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:47.408 issued rwts: total=3584,3943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:47.408 job3: (groupid=0, jobs=1): err= 0: pid=3968471: Mon Jun 10 11:35:12 2024 00:26:47.408 read: IOPS=4361, BW=17.0MiB/s (17.9MB/s)(17.8MiB/1045msec) 00:26:47.408 slat (usec): min=2, max=27885, avg=121.09, stdev=929.31 00:26:47.408 clat (usec): min=6741, max=65579, avg=17076.91, stdev=9640.99 00:26:47.408 lat (usec): min=6749, max=72524, avg=17198.00, stdev=9700.43 00:26:47.408 clat percentiles (usec): 00:26:47.408 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[10421], 20.00th=[10945], 00:26:47.408 | 
30.00th=[11207], 40.00th=[13960], 50.00th=[15008], 60.00th=[16057], 00:26:47.408 | 70.00th=[16319], 80.00th=[19530], 90.00th=[24773], 95.00th=[43779], 00:26:47.408 | 99.00th=[57934], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:26:47.408 | 99.99th=[65799] 00:26:47.408 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:26:47.408 slat (usec): min=4, max=12465, avg=81.87, stdev=477.22 00:26:47.408 clat (usec): min=1293, max=29176, avg=11857.49, stdev=3904.17 00:26:47.408 lat (usec): min=1307, max=29183, avg=11939.36, stdev=3922.30 00:26:47.408 clat percentiles (usec): 00:26:47.408 | 1.00th=[ 4293], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 8291], 00:26:47.408 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[11207], 60.00th=[12780], 00:26:47.408 | 70.00th=[15139], 80.00th=[15926], 90.00th=[16188], 95.00th=[16581], 00:26:47.408 | 99.00th=[20579], 99.50th=[23725], 99.90th=[28967], 99.95th=[29230], 00:26:47.408 | 99.99th=[29230] 00:26:47.408 bw ( KiB/s): min=16384, max=20480, per=27.98%, avg=18432.00, stdev=2896.31, samples=2 00:26:47.408 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:26:47.408 lat (msec) : 2=0.10%, 4=0.36%, 10=20.82%, 20=68.45%, 50=9.28% 00:26:47.408 lat (msec) : 100=0.99% 00:26:47.408 cpu : usr=6.51%, sys=6.70%, ctx=397, majf=0, minf=1 00:26:47.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:47.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:47.408 issued rwts: total=4558,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:47.408 00:26:47.408 Run status group 0 (all jobs): 00:26:47.408 READ: bw=61.1MiB/s (64.0MB/s), 13.9MiB/s-17.9MiB/s (14.5MB/s-18.8MB/s), io=63.8MiB (66.9MB), run=1006-1045msec 00:26:47.408 WRITE: bw=64.3MiB/s (67.5MB/s), 15.2MiB/s-18.4MiB/s (15.9MB/s-19.3MB/s), io=67.2MiB (70.5MB), run=1006-1045msec 00:26:47.408 00:26:47.408 Disk stats (read/write): 00:26:47.408 nvme0n1: ios=3634/3919, merge=0/0, ticks=24472/25666, in_queue=50138, util=85.07% 00:26:47.408 nvme0n2: ios=3094/3271, merge=0/0, ticks=47441/53620, in_queue=101061, util=86.89% 00:26:47.408 nvme0n3: ios=3089/3103, merge=0/0, ticks=41863/32401, in_queue=74264, util=90.72% 00:26:47.408 nvme0n4: ios=3641/4096, merge=0/0, ticks=51069/47662, in_queue=98731, util=96.65% 00:26:47.408 11:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:26:47.408 [global] 00:26:47.408 thread=1 00:26:47.408 invalidate=1 00:26:47.408 rw=randwrite 00:26:47.408 time_based=1 00:26:47.408 runtime=1 00:26:47.408 ioengine=libaio 00:26:47.408 direct=1 00:26:47.408 bs=4096 00:26:47.408 iodepth=128 00:26:47.408 norandommap=0 00:26:47.408 numjobs=1 00:26:47.408 00:26:47.408 verify_dump=1 00:26:47.408 verify_backlog=512 00:26:47.408 verify_state_save=0 00:26:47.408 do_verify=1 00:26:47.408 verify=crc32c-intel 00:26:47.408 [job0] 00:26:47.408 filename=/dev/nvme0n1 00:26:47.408 [job1] 00:26:47.408 filename=/dev/nvme0n2 00:26:47.408 [job2] 00:26:47.408 filename=/dev/nvme0n3 00:26:47.408 [job3] 00:26:47.408 filename=/dev/nvme0n4 00:26:47.408 Could not set queue depth (nvme0n1) 00:26:47.408 Could not set queue depth (nvme0n2) 00:26:47.408 Could not set queue depth (nvme0n3) 00:26:47.408 Could not set queue depth (nvme0n4) 00:26:47.667 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:47.667 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:47.668 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:47.668 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:47.668 fio-3.35 00:26:47.668 Starting 4 threads 00:26:49.044 00:26:49.044 job0: (groupid=0, jobs=1): err= 0: pid=3968889: Mon Jun 10 11:35:14 2024 00:26:49.044 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:26:49.044 slat (usec): min=3, max=19497, avg=186.97, stdev=954.03 00:26:49.044 clat (usec): min=9760, max=53669, avg=23815.79, stdev=10383.96 00:26:49.044 lat (usec): min=10348, max=57689, avg=24002.76, stdev=10420.43 00:26:49.044 clat percentiles (usec): 00:26:49.045 | 1.00th=[10945], 5.00th=[12518], 10.00th=[13173], 20.00th=[14222], 00:26:49.045 | 30.00th=[15401], 40.00th=[17433], 50.00th=[22414], 60.00th=[26346], 00:26:49.045 | 70.00th=[27132], 80.00th=[31851], 90.00th=[39584], 95.00th=[42730], 00:26:49.045 | 99.00th=[53216], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:26:49.045 | 99.99th=[53740] 00:26:49.045 write: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1006msec); 0 zone resets 00:26:49.045 slat (usec): min=4, max=15896, avg=163.57, stdev=884.82 00:26:49.045 clat (usec): min=2432, max=59565, avg=21667.24, stdev=10285.72 00:26:49.045 lat (usec): min=5680, max=59579, avg=21830.81, stdev=10320.51 00:26:49.045 clat percentiles (usec): 00:26:49.045 | 1.00th=[ 6259], 5.00th=[11863], 10.00th=[12125], 20.00th=[13566], 00:26:49.045 | 30.00th=[15008], 40.00th=[16581], 50.00th=[19268], 60.00th=[20841], 00:26:49.045 | 70.00th=[23725], 80.00th=[30016], 90.00th=[34866], 95.00th=[39584], 00:26:49.045 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:26:49.045 | 99.99th=[59507] 00:26:49.045 bw ( KiB/s): min=10216, max=13024, per=19.90%, avg=11620.00, stdev=1985.56, samples=2 00:26:49.045 iops : min= 2554, max= 3256, avg=2905.00, stdev=496.39, samples=2 00:26:49.045 lat (msec) : 4=0.02%, 10=0.98%, 20=51.49%, 50=44.72%, 100=2.79% 00:26:49.045 cpu : usr=3.68%, sys=4.98%, ctx=317, majf=0, minf=1 00:26:49.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:49.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:49.045 issued rwts: total=2560,3033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:49.045 job1: (groupid=0, jobs=1): err= 0: pid=3968891: Mon Jun 10 11:35:14 2024 00:26:49.045 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:26:49.045 slat (usec): min=2, max=14176, avg=105.94, stdev=756.93 00:26:49.045 clat (usec): min=6743, max=56890, avg=14945.17, stdev=7262.87 00:26:49.045 lat (usec): min=6752, max=56907, avg=15051.11, stdev=7310.08 00:26:49.045 clat percentiles (usec): 00:26:49.045 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[11207], 00:26:49.045 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12649], 60.00th=[13173], 00:26:49.045 | 70.00th=[14222], 80.00th=[16581], 90.00th=[20579], 95.00th=[30540], 00:26:49.045 | 99.00th=[44827], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:26:49.045 | 99.99th=[56886] 00:26:49.045 write: IOPS=3804, 
BW=14.9MiB/s (15.6MB/s)(15.0MiB/1011msec); 0 zone resets 00:26:49.045 slat (usec): min=2, max=23512, avg=152.40, stdev=1061.56 00:26:49.045 clat (usec): min=2700, max=64604, avg=19256.89, stdev=14452.76 00:26:49.045 lat (usec): min=2716, max=64630, avg=19409.29, stdev=14532.21 00:26:49.045 clat percentiles (usec): 00:26:49.045 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 7570], 20.00th=[ 8586], 00:26:49.045 | 30.00th=[10290], 40.00th=[11338], 50.00th=[12649], 60.00th=[14353], 00:26:49.045 | 70.00th=[21365], 80.00th=[31589], 90.00th=[44827], 95.00th=[51119], 00:26:49.045 | 99.00th=[61080], 99.50th=[61080], 99.90th=[64750], 99.95th=[64750], 00:26:49.045 | 99.99th=[64750] 00:26:49.045 bw ( KiB/s): min=10728, max=19024, per=25.48%, avg=14876.00, stdev=5866.16, samples=2 00:26:49.045 iops : min= 2682, max= 4756, avg=3719.00, stdev=1466.54, samples=2 00:26:49.045 lat (msec) : 4=0.16%, 10=17.60%, 20=60.94%, 50=17.82%, 100=3.47% 00:26:49.045 cpu : usr=4.75%, sys=5.35%, ctx=348, majf=0, minf=1 00:26:49.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:49.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:49.045 issued rwts: total=3584,3846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:49.045 job2: (groupid=0, jobs=1): err= 0: pid=3968892: Mon Jun 10 11:35:14 2024 00:26:49.045 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:26:49.045 slat (nsec): min=1890, max=12613k, avg=99035.71, stdev=747747.92 00:26:49.045 clat (usec): min=3641, max=50727, avg=14988.47, stdev=5046.05 00:26:49.045 lat (usec): min=4963, max=57499, avg=15087.50, stdev=5087.40 00:26:49.045 clat percentiles (usec): 00:26:49.045 | 1.00th=[ 5735], 5.00th=[ 9110], 10.00th=[10814], 20.00th=[11994], 00:26:49.045 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13960], 60.00th=[14746], 00:26:49.045 | 70.00th=[16188], 80.00th=[17171], 90.00th=[19792], 95.00th=[24511], 00:26:49.045 | 99.00th=[34866], 99.50th=[37487], 99.90th=[50070], 99.95th=[50070], 00:26:49.045 | 99.99th=[50594] 00:26:49.045 write: IOPS=4758, BW=18.6MiB/s (19.5MB/s)(18.8MiB/1010msec); 0 zone resets 00:26:49.045 slat (usec): min=2, max=10406, avg=79.80, stdev=597.47 00:26:49.045 clat (usec): min=467, max=50178, avg=12310.77, stdev=6198.94 00:26:49.045 lat (usec): min=714, max=52442, avg=12390.57, stdev=6217.88 00:26:49.045 clat percentiles (usec): 00:26:49.045 | 1.00th=[ 3851], 5.00th=[ 5932], 10.00th=[ 6915], 20.00th=[ 8160], 00:26:49.045 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[11863], 00:26:49.045 | 70.00th=[13304], 80.00th=[14877], 90.00th=[19268], 95.00th=[24511], 00:26:49.045 | 99.00th=[31851], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:26:49.045 | 99.99th=[50070] 00:26:49.045 bw ( KiB/s): min=16960, max=20472, per=32.06%, avg=18716.00, stdev=2483.36, samples=2 00:26:49.045 iops : min= 4240, max= 5118, avg=4679.00, stdev=620.84, samples=2 00:26:49.045 lat (usec) : 500=0.01%, 750=0.02% 00:26:49.045 lat (msec) : 2=0.04%, 4=0.47%, 10=23.91%, 20=66.70%, 50=8.40% 00:26:49.045 lat (msec) : 100=0.45% 00:26:49.045 cpu : usr=5.45%, sys=6.74%, ctx=290, majf=0, minf=1 00:26:49.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:49.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:26:49.045 issued rwts: total=4608,4806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:49.045 job3: (groupid=0, jobs=1): err= 0: pid=3968893: Mon Jun 10 11:35:14 2024 00:26:49.045 read: IOPS=2995, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1006msec) 00:26:49.045 slat (usec): min=2, max=31248, avg=150.27, stdev=1132.03 00:26:49.045 clat (usec): min=3067, max=88478, avg=24612.35, stdev=14539.81 00:26:49.045 lat (usec): min=3074, max=88488, avg=24762.62, stdev=14613.00 00:26:49.045 clat percentiles (usec): 00:26:49.045 | 1.00th=[ 5407], 5.00th=[ 9896], 10.00th=[11863], 20.00th=[14615], 00:26:49.045 | 30.00th=[15795], 40.00th=[18220], 50.00th=[21365], 60.00th=[23462], 00:26:49.045 | 70.00th=[27132], 80.00th=[33817], 90.00th=[40109], 95.00th=[54264], 00:26:49.045 | 99.00th=[84411], 99.50th=[84411], 99.90th=[88605], 99.95th=[88605], 00:26:49.045 | 99.99th=[88605] 00:26:49.045 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:26:49.045 slat (usec): min=3, max=12914, avg=117.89, stdev=768.04 00:26:49.045 clat (usec): min=1589, max=69554, avg=17432.68, stdev=7217.11 00:26:49.045 lat (usec): min=1602, max=69561, avg=17550.57, stdev=7237.66 00:26:49.045 clat percentiles (usec): 00:26:49.045 | 1.00th=[ 6456], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[12125], 00:26:49.045 | 30.00th=[13304], 40.00th=[15139], 50.00th=[16188], 60.00th=[17433], 00:26:49.045 | 70.00th=[19006], 80.00th=[22414], 90.00th=[26870], 95.00th=[30540], 00:26:49.045 | 99.00th=[42206], 99.50th=[56361], 99.90th=[66323], 99.95th=[66323], 00:26:49.045 | 99.99th=[69731] 00:26:49.045 bw ( KiB/s): min= 8192, max=16384, per=21.05%, avg=12288.00, stdev=5792.62, samples=2 00:26:49.045 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:26:49.045 lat (msec) : 2=0.03%, 4=0.25%, 10=7.53%, 20=50.21%, 50=39.13% 00:26:49.045 lat (msec) : 100=2.86% 00:26:49.045 cpu : usr=2.29%, sys=5.67%, ctx=253, majf=0, minf=1 00:26:49.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:49.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:49.045 issued rwts: total=3013,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:49.045 00:26:49.045 Run status group 0 (all jobs): 00:26:49.045 READ: bw=53.2MiB/s (55.8MB/s), 9.94MiB/s-17.8MiB/s (10.4MB/s-18.7MB/s), io=53.8MiB (56.4MB), run=1006-1011msec 00:26:49.045 WRITE: bw=57.0MiB/s (59.8MB/s), 11.8MiB/s-18.6MiB/s (12.3MB/s-19.5MB/s), io=57.6MiB (60.4MB), run=1006-1011msec 00:26:49.045 00:26:49.045 Disk stats (read/write): 00:26:49.045 nvme0n1: ios=2097/2555, merge=0/0, ticks=11760/13177, in_queue=24937, util=88.37% 00:26:49.045 nvme0n2: ios=2589/2871, merge=0/0, ticks=26256/31965, in_queue=58221, util=92.15% 00:26:49.045 nvme0n3: ios=3607/3614, merge=0/0, ticks=50944/40178, in_queue=91122, util=95.29% 00:26:49.045 nvme0n4: ios=2428/2560, merge=0/0, ticks=30532/27822, in_queue=58354, util=95.75% 00:26:49.045 11:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:26:49.045 11:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3969156 00:26:49.045 11:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:26:49.045 11:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 
00:26:49.045 [global] 00:26:49.045 thread=1 00:26:49.045 invalidate=1 00:26:49.045 rw=read 00:26:49.045 time_based=1 00:26:49.045 runtime=10 00:26:49.045 ioengine=libaio 00:26:49.045 direct=1 00:26:49.045 bs=4096 00:26:49.045 iodepth=1 00:26:49.045 norandommap=1 00:26:49.045 numjobs=1 00:26:49.045 00:26:49.045 [job0] 00:26:49.045 filename=/dev/nvme0n1 00:26:49.045 [job1] 00:26:49.045 filename=/dev/nvme0n2 00:26:49.045 [job2] 00:26:49.045 filename=/dev/nvme0n3 00:26:49.045 [job3] 00:26:49.045 filename=/dev/nvme0n4 00:26:49.324 Could not set queue depth (nvme0n1) 00:26:49.324 Could not set queue depth (nvme0n2) 00:26:49.324 Could not set queue depth (nvme0n3) 00:26:49.324 Could not set queue depth (nvme0n4) 00:26:49.585 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:49.585 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:49.585 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:49.585 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:49.585 fio-3.35 00:26:49.585 Starting 4 threads 00:26:52.107 11:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:26:52.364 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=21413888, buflen=4096 00:26:52.364 fio: pid=3969321, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:26:52.364 11:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:26:52.622 11:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:52.622 11:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:26:52.622 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=569344, buflen=4096 00:26:52.622 fio: pid=3969320, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:26:52.879 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=26152960, buflen=4096 00:26:52.879 fio: pid=3969316, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:26:52.879 11:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:52.879 11:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:26:53.137 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=761856, buflen=4096 00:26:53.137 fio: pid=3969317, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:26:53.137 11:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:53.137 11:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:26:53.137 00:26:53.137 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3969316: Mon Jun 10 11:35:18 2024 00:26:53.137 read: IOPS=2033, BW=8134KiB/s (8329kB/s)(24.9MiB/3140msec) 00:26:53.137 slat (usec): min=8, max=33717, 
avg=23.46, stdev=602.42 00:26:53.137 clat (usec): min=326, max=3867, avg=463.15, stdev=51.42 00:26:53.137 lat (usec): min=335, max=34242, avg=486.62, stdev=606.79 00:26:53.137 clat percentiles (usec): 00:26:53.137 | 1.00th=[ 388], 5.00th=[ 424], 10.00th=[ 437], 20.00th=[ 445], 00:26:53.137 | 30.00th=[ 453], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 469], 00:26:53.137 | 70.00th=[ 474], 80.00th=[ 478], 90.00th=[ 486], 95.00th=[ 494], 00:26:53.137 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 676], 99.95th=[ 898], 00:26:53.137 | 99.99th=[ 3884] 00:26:53.137 bw ( KiB/s): min= 7502, max= 8416, per=58.21%, avg=8205.00, stdev=354.34, samples=6 00:26:53.137 iops : min= 1875, max= 2104, avg=2051.17, stdev=88.78, samples=6 00:26:53.137 lat (usec) : 500=96.46%, 750=3.45%, 1000=0.03% 00:26:53.137 lat (msec) : 2=0.03%, 4=0.02% 00:26:53.137 cpu : usr=1.34%, sys=3.63%, ctx=6391, majf=0, minf=1 00:26:53.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:53.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.137 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.137 issued rwts: total=6386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:53.137 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3969317: Mon Jun 10 11:35:18 2024 00:26:53.137 read: IOPS=55, BW=220KiB/s (225kB/s)(744KiB/3388msec) 00:26:53.137 slat (usec): min=8, max=9730, avg=119.09, stdev=998.78 00:26:53.137 clat (usec): min=319, max=42055, avg=17977.04, stdev=20225.74 00:26:53.137 lat (usec): min=327, max=51009, avg=18096.40, stdev=20377.06 00:26:53.137 clat percentiles (usec): 00:26:53.137 | 1.00th=[ 338], 5.00th=[ 367], 10.00th=[ 388], 20.00th=[ 429], 00:26:53.137 | 30.00th=[ 482], 40.00th=[ 519], 50.00th=[ 545], 60.00th=[40633], 00:26:53.137 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:26:53.137 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:26:53.137 | 99.99th=[42206] 00:26:53.137 bw ( KiB/s): min= 96, max= 664, per=1.66%, avg=235.00, stdev=231.72, samples=6 00:26:53.137 iops : min= 24, max= 166, avg=58.67, stdev=57.99, samples=6 00:26:53.137 lat (usec) : 500=35.29%, 750=21.39% 00:26:53.137 lat (msec) : 50=42.78% 00:26:53.137 cpu : usr=0.00%, sys=0.15%, ctx=191, majf=0, minf=1 00:26:53.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:53.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.137 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.137 issued rwts: total=187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:53.137 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3969320: Mon Jun 10 11:35:18 2024 00:26:53.137 read: IOPS=48, BW=191KiB/s (196kB/s)(556KiB/2912msec) 00:26:53.137 slat (nsec): min=8408, max=89252, avg=13455.75, stdev=8544.96 00:26:53.137 clat (usec): min=436, max=50756, avg=20779.85, stdev=20548.79 00:26:53.137 lat (usec): min=445, max=50771, avg=20793.23, stdev=20551.78 00:26:53.137 clat percentiles (usec): 00:26:53.137 | 1.00th=[ 441], 5.00th=[ 449], 10.00th=[ 457], 20.00th=[ 461], 00:26:53.137 | 30.00th=[ 469], 40.00th=[ 478], 50.00th=[ 660], 60.00th=[41157], 00:26:53.137 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 
00:26:53.137 | 99.00th=[42206], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:26:53.137 | 99.99th=[50594] 00:26:53.137 bw ( KiB/s): min= 96, max= 104, per=0.69%, avg=97.60, stdev= 3.58, samples=5 00:26:53.137 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:26:53.137 lat (usec) : 500=47.86%, 750=2.14% 00:26:53.137 lat (msec) : 50=48.57%, 100=0.71% 00:26:53.137 cpu : usr=0.00%, sys=0.10%, ctx=142, majf=0, minf=1 00:26:53.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:53.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.137 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.137 issued rwts: total=140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:53.137 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3969321: Mon Jun 10 11:35:18 2024 00:26:53.137 read: IOPS=1972, BW=7888KiB/s (8078kB/s)(20.4MiB/2651msec) 00:26:53.137 slat (usec): min=8, max=112, avg= 9.51, stdev= 3.98 00:26:53.137 clat (usec): min=324, max=2454, avg=490.29, stdev=46.35 00:26:53.137 lat (usec): min=333, max=2465, avg=499.79, stdev=46.63 00:26:53.137 clat percentiles (usec): 00:26:53.137 | 1.00th=[ 375], 5.00th=[ 453], 10.00th=[ 469], 20.00th=[ 478], 00:26:53.137 | 30.00th=[ 482], 40.00th=[ 486], 50.00th=[ 490], 60.00th=[ 494], 00:26:53.137 | 70.00th=[ 498], 80.00th=[ 502], 90.00th=[ 510], 95.00th=[ 523], 00:26:53.137 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 725], 99.95th=[ 1663], 00:26:53.137 | 99.99th=[ 2442] 00:26:53.137 bw ( KiB/s): min= 7896, max= 8072, per=56.70%, avg=7992.00, stdev=66.69, samples=5 00:26:53.137 iops : min= 1974, max= 2018, avg=1998.00, stdev=16.67, samples=5 00:26:53.137 lat (usec) : 500=73.25%, 750=26.64%, 1000=0.04% 00:26:53.137 lat (msec) : 2=0.04%, 4=0.02% 00:26:53.137 cpu : usr=0.60%, sys=2.45%, ctx=5229, majf=0, minf=2 00:26:53.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:53.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.138 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.138 issued rwts: total=5229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:53.138 00:26:53.138 Run status group 0 (all jobs): 00:26:53.138 READ: bw=13.8MiB/s (14.4MB/s), 191KiB/s-8134KiB/s (196kB/s-8329kB/s), io=46.6MiB (48.9MB), run=2651-3388msec 00:26:53.138 00:26:53.138 Disk stats (read/write): 00:26:53.138 nvme0n1: ios=6287/0, merge=0/0, ticks=2834/0, in_queue=2834, util=92.23% 00:26:53.138 nvme0n2: ios=184/0, merge=0/0, ticks=3263/0, in_queue=3263, util=95.25% 00:26:53.138 nvme0n3: ios=137/0, merge=0/0, ticks=2809/0, in_queue=2809, util=96.35% 00:26:53.138 nvme0n4: ios=5139/0, merge=0/0, ticks=2496/0, in_queue=2496, util=96.41% 00:26:53.395 11:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:53.395 11:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:26:53.652 11:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:53.652 11:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:26:53.910 11:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:53.910 11:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:26:53.910 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:53.910 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:26:54.210 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:26:54.210 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3969156 00:26:54.210 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:26:54.210 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:54.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:26:54.483 nvmf hotplug test: fio failed as expected 00:26:54.483 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:54.740 rmmod nvme_tcp 00:26:54.740 rmmod nvme_fabrics 00:26:54.740 rmmod nvme_keyring 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:54.740 11:35:19 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3965942 ']' 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3965942 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 3965942 ']' 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 3965942 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3965942 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3965942' 00:26:54.740 killing process with pid 3965942 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 3965942 00:26:54.740 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 3965942 00:26:54.998 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:54.998 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:54.998 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:54.998 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:54.998 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:54.998 11:35:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.998 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:54.998 11:35:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.528 11:35:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:57.528 00:26:57.528 real 0m31.904s 00:26:57.528 user 2m25.108s 00:26:57.528 sys 0m11.810s 00:26:57.528 11:35:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:57.528 11:35:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:57.528 ************************************ 00:26:57.528 END TEST nvmf_fio_target 00:26:57.528 ************************************ 00:26:57.528 11:35:22 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:26:57.528 11:35:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:57.528 11:35:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:57.528 11:35:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:57.528 ************************************ 00:26:57.528 START TEST nvmf_bdevio 00:26:57.528 ************************************ 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:26:57.528 * Looking for test storage... 
00:26:57.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:26:57.528 11:35:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:05.641 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:05.641 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:05.641 Found net devices under 0000:af:00.0: cvl_0_0 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:05.641 
Found net devices under 0000:af:00.1: cvl_0_1 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.641 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.900 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.900 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.900 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:05.900 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.900 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.900 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.900 11:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:27:05.900 00:27:05.900 --- 10.0.0.2 ping statistics --- 00:27:05.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.900 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:05.900 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:27:06.158 00:27:06.158 --- 10.0.0.1 ping statistics --- 00:27:06.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.158 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3974720 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3974720 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 3974720 ']' 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:06.158 11:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:06.158 [2024-06-10 11:35:31.114812] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:27:06.158 [2024-06-10 11:35:31.114871] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.158 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.158 [2024-06-10 11:35:31.241624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.415 [2024-06-10 11:35:31.329340] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.415 [2024-06-10 11:35:31.329384] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:06.415 [2024-06-10 11:35:31.329398] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.415 [2024-06-10 11:35:31.329414] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.415 [2024-06-10 11:35:31.329424] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.415 [2024-06-10 11:35:31.329548] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:27:06.415 [2024-06-10 11:35:31.329659] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:27:06.415 [2024-06-10 11:35:31.329768] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.415 [2024-06-10 11:35:31.329768] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:06.979 [2024-06-10 11:35:32.072725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:06.979 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:07.237 Malloc0 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:27:07.237 [2024-06-10 11:35:32.128614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:07.237 { 00:27:07.237 "params": { 00:27:07.237 "name": "Nvme$subsystem", 00:27:07.237 "trtype": "$TEST_TRANSPORT", 00:27:07.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.237 "adrfam": "ipv4", 00:27:07.237 "trsvcid": "$NVMF_PORT", 00:27:07.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.237 "hdgst": ${hdgst:-false}, 00:27:07.237 "ddgst": ${ddgst:-false} 00:27:07.237 }, 00:27:07.237 "method": "bdev_nvme_attach_controller" 00:27:07.237 } 00:27:07.237 EOF 00:27:07.237 )") 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:27:07.237 11:35:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:07.237 "params": { 00:27:07.237 "name": "Nvme1", 00:27:07.237 "trtype": "tcp", 00:27:07.237 "traddr": "10.0.0.2", 00:27:07.237 "adrfam": "ipv4", 00:27:07.237 "trsvcid": "4420", 00:27:07.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:07.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:07.237 "hdgst": false, 00:27:07.237 "ddgst": false 00:27:07.237 }, 00:27:07.237 "method": "bdev_nvme_attach_controller" 00:27:07.237 }' 00:27:07.237 [2024-06-10 11:35:32.183968] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
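Before the bdevio binary above starts exercising the namespace, the target side is assembled by the five RPCs traced in bdevio.sh @18-@22, and the JSON just printed by gen_nvmf_target_json is what bdevio hands to bdev_nvme_attach_controller. A condensed, stand-alone sketch of that bring-up follows; the direct rpc.py invocation is an assumption (the harness goes through its rpc_cmd wrapper), so treat it as illustrative rather than as the test script itself.

#!/usr/bin/env bash
# Sketch of the target bring-up traced above (bdevio.sh @18-@22); rpc.py path assumed
# to be the same SPDK checkout used elsewhere in this run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport with the options logged above
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM bdev, 512 B blocks (backs Nvme1n1: 131072 blocks)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420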
00:27:07.237 [2024-06-10 11:35:32.184030] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3974866 ] 00:27:07.237 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.237 [2024-06-10 11:35:32.303880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:07.493 [2024-06-10 11:35:32.389030] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.493 [2024-06-10 11:35:32.389124] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.493 [2024-06-10 11:35:32.389125] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.750 I/O targets: 00:27:07.750 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:27:07.750 00:27:07.750 00:27:07.750 CUnit - A unit testing framework for C - Version 2.1-3 00:27:07.750 http://cunit.sourceforge.net/ 00:27:07.750 00:27:07.750 00:27:07.750 Suite: bdevio tests on: Nvme1n1 00:27:07.750 Test: blockdev write read block ...passed 00:27:07.750 Test: blockdev write zeroes read block ...passed 00:27:07.750 Test: blockdev write zeroes read no split ...passed 00:27:07.750 Test: blockdev write zeroes read split ...passed 00:27:08.007 Test: blockdev write zeroes read split partial ...passed 00:27:08.007 Test: blockdev reset ...[2024-06-10 11:35:32.881558] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.007 [2024-06-10 11:35:32.881636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cf5b0 (9): Bad file descriptor 00:27:08.007 [2024-06-10 11:35:32.897081] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:08.007 passed 00:27:08.007 Test: blockdev write read 8 blocks ...passed 00:27:08.007 Test: blockdev write read size > 128k ...passed 00:27:08.007 Test: blockdev write read invalid size ...passed 00:27:08.007 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:08.007 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:08.007 Test: blockdev write read max offset ...passed 00:27:08.007 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:08.007 Test: blockdev writev readv 8 blocks ...passed 00:27:08.007 Test: blockdev writev readv 30 x 1block ...passed 00:27:08.007 Test: blockdev writev readv block ...passed 00:27:08.264 Test: blockdev writev readv size > 128k ...passed 00:27:08.264 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:08.264 Test: blockdev comparev and writev ...[2024-06-10 11:35:33.114264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:08.264 [2024-06-10 11:35:33.114293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.114309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:08.264 [2024-06-10 11:35:33.114319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.114676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:08.264 [2024-06-10 11:35:33.114689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.114703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:08.264 [2024-06-10 11:35:33.114718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.115088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:08.264 [2024-06-10 11:35:33.115102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.115116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:08.264 [2024-06-10 11:35:33.115126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.115491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:08.264 [2024-06-10 11:35:33.115505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.115519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:27:08.264 [2024-06-10 11:35:33.115529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:08.264 passed 00:27:08.264 Test: blockdev nvme passthru rw ...passed 00:27:08.264 Test: blockdev nvme passthru vendor specific ...[2024-06-10 11:35:33.199124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:08.264 [2024-06-10 11:35:33.199142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.199338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:08.264 [2024-06-10 11:35:33.199350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.199543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:08.264 [2024-06-10 11:35:33.199554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:08.264 [2024-06-10 11:35:33.199753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:08.264 [2024-06-10 11:35:33.199765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:08.264 passed 00:27:08.264 Test: blockdev nvme admin passthru ...passed 00:27:08.264 Test: blockdev copy ...passed 00:27:08.264 00:27:08.264 Run Summary: Type Total Ran Passed Failed Inactive 00:27:08.264 suites 1 1 n/a 0 0 00:27:08.264 tests 23 23 23 0 0 00:27:08.264 asserts 152 152 152 0 n/a 00:27:08.264 00:27:08.264 Elapsed time = 1.171 seconds 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.522 rmmod nvme_tcp 00:27:08.522 rmmod nvme_fabrics 00:27:08.522 rmmod nvme_keyring 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3974720 ']' 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3974720 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
3974720 ']' 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 3974720 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3974720 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3974720' 00:27:08.522 killing process with pid 3974720 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 3974720 00:27:08.522 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 3974720 00:27:08.797 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:08.797 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.797 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.797 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.797 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.797 11:35:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.797 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.797 11:35:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.328 11:35:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:11.328 00:27:11.328 real 0m13.700s 00:27:11.328 user 0m14.506s 00:27:11.328 sys 0m7.484s 00:27:11.328 11:35:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:11.328 11:35:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:27:11.328 ************************************ 00:27:11.328 END TEST nvmf_bdevio 00:27:11.328 ************************************ 00:27:11.328 11:35:35 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:27:11.328 11:35:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:11.328 11:35:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:11.328 11:35:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:11.328 ************************************ 00:27:11.328 START TEST nvmf_auth_target 00:27:11.328 ************************************ 00:27:11.328 11:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:27:11.328 * Looking for test storage... 
00:27:11.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:27:11.328 11:35:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.329 11:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.442 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.442 11:35:44 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:19.443 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:19.443 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:27:19.443 Found net devices under 0000:af:00.0: cvl_0_0 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:19.443 Found net devices under 0000:af:00.1: cvl_0_1 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:19.443 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:19.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:19.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:27:19.702 00:27:19.702 --- 10.0.0.2 ping statistics --- 00:27:19.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.702 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:19.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:19.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:27:19.702 00:27:19.702 --- 10.0.0.1 ping statistics --- 00:27:19.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:19.702 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3979565 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3979565 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3979565 ']' 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
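The nvmf_tcp_init sequence traced above moves the target-side port into its own network namespace, gives each end one address, opens port 4420 and then ping-checks both directions. A condensed sketch is below; the interface names (cvl_0_0 / cvl_0_1) and addresses are exactly as logged, the rest is an illustrative stand-alone script, not the harness itself.

#!/usr/bin/env bash
# Condensed version of the nvmf_tcp_init steps traced above.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator keeps 10.0.0.1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target gets 10.0.0.2
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity check both directions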
00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:19.702 11:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.636 11:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:20.636 11:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:27:20.637 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:20.637 11:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:20.637 11:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3979840 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4af073f97a4031a79bfba767b4cfebd4163f698c5b07924 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yzU 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4af073f97a4031a79bfba767b4cfebd4163f698c5b07924 0 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4af073f97a4031a79bfba767b4cfebd4163f698c5b07924 0 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4af073f97a4031a79bfba767b4cfebd4163f698c5b07924 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yzU 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yzU 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.yzU 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4dce772a1d4b73f1fe16411570ead82212ab2ee7f6207049ad7d3a7f7f0a0da4 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.D0U 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4dce772a1d4b73f1fe16411570ead82212ab2ee7f6207049ad7d3a7f7f0a0da4 3 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4dce772a1d4b73f1fe16411570ead82212ab2ee7f6207049ad7d3a7f7f0a0da4 3 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4dce772a1d4b73f1fe16411570ead82212ab2ee7f6207049ad7d3a7f7f0a0da4 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.D0U 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.D0U 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.D0U 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42657341a32d57a5ec328843f8df13ba 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LpM 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 42657341a32d57a5ec328843f8df13ba 1 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 42657341a32d57a5ec328843f8df13ba 1 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=42657341a32d57a5ec328843f8df13ba 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:27:20.895 11:35:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LpM 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LpM 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.LpM 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9943831c154ebff4e612e91bdc69bafa4a529cdc1bc806f0 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IMB 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9943831c154ebff4e612e91bdc69bafa4a529cdc1bc806f0 2 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9943831c154ebff4e612e91bdc69bafa4a529cdc1bc806f0 2 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9943831c154ebff4e612e91bdc69bafa4a529cdc1bc806f0 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IMB 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IMB 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.IMB 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5f8c08ab6b59045c49aba56e9a464d5be7bbe6dff2194eb2 00:27:21.154 
11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wPc 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5f8c08ab6b59045c49aba56e9a464d5be7bbe6dff2194eb2 2 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5f8c08ab6b59045c49aba56e9a464d5be7bbe6dff2194eb2 2 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5f8c08ab6b59045c49aba56e9a464d5be7bbe6dff2194eb2 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wPc 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wPc 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.wPc 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff36329befa0adb371ab0de90b52510d 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.q0f 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff36329befa0adb371ab0de90b52510d 1 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff36329befa0adb371ab0de90b52510d 1 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff36329befa0adb371ab0de90b52510d 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.q0f 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.q0f 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.q0f 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7d5382cbef1d5c9d2557b1c6d683a4d0c86985005dc3a729315be10815b7e64b 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uyW 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7d5382cbef1d5c9d2557b1c6d683a4d0c86985005dc3a729315be10815b7e64b 3 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7d5382cbef1d5c9d2557b1c6d683a4d0c86985005dc3a729315be10815b7e64b 3 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7d5382cbef1d5c9d2557b1c6d683a4d0c86985005dc3a729315be10815b7e64b 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:27:21.154 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uyW 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uyW 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.uyW 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3979565 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3979565 ']' 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
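Each of the four key pairs above is produced the same way: gen_dhchap_key pulls random hex from /dev/urandom and format_key wraps it into the DHHC-1:<digest>:<base64>: string that later appears on the nvme connect line, where the base64 payload is the ASCII hex secret with a 4-byte CRC-32 appended. A rough stand-alone equivalent is sketched below; the little-endian CRC byte order is an assumption based on common tooling, not something visible in the trace.

#!/usr/bin/env bash
# Sketch of gen_dhchap_key/format_key as traced above; illustrative, not the harness.
gen_dhchap_key() {   # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex length>
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key
    key=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)    # ASCII hex string, $2 characters long
    python3 - "$key" "${digests[$1]}" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the secret is the hex string itself
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC-32 appended (byte order assumed)
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}

gen_dhchap_key null 48      # shape of keys[0] above: DHHC-1:00:<72 base64 chars>:

The ckey variants seen in the trace are generated the same way and differ only in the digest and length requested.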
00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3979840 /var/tmp/host.sock 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 3979840 ']' 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:27:21.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:21.412 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yzU 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yzU 00:27:21.670 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yzU 00:27:21.927 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.D0U ]] 00:27:21.927 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D0U 00:27:21.927 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.927 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.927 11:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.927 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D0U 00:27:21.927 11:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D0U 00:27:22.185 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:27:22.185 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.LpM 00:27:22.185 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.185 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:22.185 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.185 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.LpM 00:27:22.185 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.LpM 00:27:22.443 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.IMB ]] 00:27:22.443 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IMB 00:27:22.443 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.443 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:22.443 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.443 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IMB 00:27:22.443 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IMB 00:27:22.700 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:27:22.700 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wPc 00:27:22.700 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.700 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:22.700 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.700 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wPc 00:27:22.700 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wPc 00:27:22.958 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.q0f ]] 00:27:22.958 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.q0f 00:27:22.958 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.958 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:22.958 11:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.958 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.q0f 00:27:22.958 11:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.q0f 00:27:23.216 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:27:23.216 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uyW 00:27:23.216 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.216 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:23.216 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.216 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.uyW 00:27:23.216 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.uyW 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.474 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:23.732 11:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.732 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.732 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.990 00:27:23.990 11:35:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:27:23.990 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:23.990 11:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:24.248 { 00:27:24.248 "cntlid": 1, 00:27:24.248 "qid": 0, 00:27:24.248 "state": "enabled", 00:27:24.248 "listen_address": { 00:27:24.248 "trtype": "TCP", 00:27:24.248 "adrfam": "IPv4", 00:27:24.248 "traddr": "10.0.0.2", 00:27:24.248 "trsvcid": "4420" 00:27:24.248 }, 00:27:24.248 "peer_address": { 00:27:24.248 "trtype": "TCP", 00:27:24.248 "adrfam": "IPv4", 00:27:24.248 "traddr": "10.0.0.1", 00:27:24.248 "trsvcid": "33852" 00:27:24.248 }, 00:27:24.248 "auth": { 00:27:24.248 "state": "completed", 00:27:24.248 "digest": "sha256", 00:27:24.248 "dhgroup": "null" 00:27:24.248 } 00:27:24.248 } 00:27:24.248 ]' 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:24.248 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:24.506 11:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:25.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:25.439 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.440 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.440 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:25.440 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.440 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.440 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.697 00:27:25.697 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:25.697 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:25.698 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:25.955 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.955 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:25.955 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.955 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:25.955 11:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.955 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:25.955 { 00:27:25.955 "cntlid": 3, 00:27:25.955 "qid": 0, 00:27:25.955 "state": "enabled", 00:27:25.955 "listen_address": { 00:27:25.955 
"trtype": "TCP", 00:27:25.955 "adrfam": "IPv4", 00:27:25.955 "traddr": "10.0.0.2", 00:27:25.955 "trsvcid": "4420" 00:27:25.955 }, 00:27:25.955 "peer_address": { 00:27:25.955 "trtype": "TCP", 00:27:25.955 "adrfam": "IPv4", 00:27:25.955 "traddr": "10.0.0.1", 00:27:25.955 "trsvcid": "33868" 00:27:25.955 }, 00:27:25.955 "auth": { 00:27:25.955 "state": "completed", 00:27:25.955 "digest": "sha256", 00:27:25.955 "dhgroup": "null" 00:27:25.955 } 00:27:25.955 } 00:27:25.955 ]' 00:27:25.955 11:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:25.955 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:25.955 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:26.211 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:26.211 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:26.211 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:26.211 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:26.211 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:26.483 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:27:27.063 11:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:27.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:27.063 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:27.063 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.063 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.063 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.063 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:27.063 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:27.063 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.321 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.579 00:27:27.579 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:27.579 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:27.579 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:27.837 { 00:27:27.837 "cntlid": 5, 00:27:27.837 "qid": 0, 00:27:27.837 "state": "enabled", 00:27:27.837 "listen_address": { 00:27:27.837 "trtype": "TCP", 00:27:27.837 "adrfam": "IPv4", 00:27:27.837 "traddr": "10.0.0.2", 00:27:27.837 "trsvcid": "4420" 00:27:27.837 }, 00:27:27.837 "peer_address": { 00:27:27.837 "trtype": "TCP", 00:27:27.837 "adrfam": "IPv4", 00:27:27.837 "traddr": "10.0.0.1", 00:27:27.837 "trsvcid": "33890" 00:27:27.837 }, 00:27:27.837 "auth": { 00:27:27.837 "state": "completed", 00:27:27.837 "digest": "sha256", 00:27:27.837 "dhgroup": "null" 00:27:27.837 } 00:27:27.837 } 00:27:27.837 ]' 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:27.837 11:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:28.095 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:27:29.029 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:29.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:29.029 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:29.029 11:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.029 11:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.029 11:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.029 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:29.029 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:29.029 11:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:29.029 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:29.287 00:27:29.287 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:29.287 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:29.287 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:29.545 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.545 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:29.545 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.545 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:29.545 11:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.545 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:29.545 { 00:27:29.545 "cntlid": 7, 00:27:29.545 "qid": 0, 00:27:29.545 "state": "enabled", 00:27:29.545 "listen_address": { 00:27:29.545 "trtype": "TCP", 00:27:29.545 "adrfam": "IPv4", 00:27:29.545 "traddr": "10.0.0.2", 00:27:29.545 "trsvcid": "4420" 00:27:29.545 }, 00:27:29.545 "peer_address": { 00:27:29.545 "trtype": "TCP", 00:27:29.545 "adrfam": "IPv4", 00:27:29.545 "traddr": "10.0.0.1", 00:27:29.545 "trsvcid": "33910" 00:27:29.545 }, 00:27:29.545 "auth": { 00:27:29.545 "state": "completed", 00:27:29.545 "digest": "sha256", 00:27:29.545 "dhgroup": "null" 00:27:29.545 } 00:27:29.545 } 00:27:29.545 ]' 00:27:29.545 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:29.545 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:29.545 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:29.803 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:27:29.803 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:29.803 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:29.803 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:29.803 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:30.061 11:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:27:30.627 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:30.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:30.627 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:30.627 11:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.627 
11:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.627 11:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.627 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.627 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:30.627 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.627 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.885 11:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.144 00:27:31.144 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:31.144 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:31.144 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:31.402 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.402 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:31.402 11:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.402 11:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:31.402 11:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.402 11:35:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:31.402 { 00:27:31.402 "cntlid": 9, 00:27:31.402 "qid": 0, 00:27:31.402 "state": "enabled", 00:27:31.402 "listen_address": { 00:27:31.402 "trtype": "TCP", 00:27:31.402 "adrfam": "IPv4", 00:27:31.402 "traddr": "10.0.0.2", 00:27:31.402 "trsvcid": "4420" 00:27:31.402 }, 00:27:31.402 "peer_address": { 00:27:31.402 "trtype": "TCP", 00:27:31.402 "adrfam": "IPv4", 00:27:31.402 "traddr": "10.0.0.1", 00:27:31.402 "trsvcid": "33934" 00:27:31.402 }, 00:27:31.402 "auth": { 00:27:31.402 "state": "completed", 00:27:31.402 "digest": "sha256", 00:27:31.402 "dhgroup": "ffdhe2048" 00:27:31.402 } 00:27:31.402 } 00:27:31.402 ]' 00:27:31.402 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:31.402 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:31.659 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:31.659 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:31.659 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:31.659 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:31.659 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:31.659 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:31.917 11:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:27:32.483 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:32.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:32.483 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:32.483 11:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.483 11:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.483 11:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.483 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:32.483 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:32.483 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:32.741 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:27:32.741 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:32.741 11:35:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:32.741 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:32.741 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:32.741 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:32.741 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.741 11:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.741 11:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.741 11:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.742 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.742 11:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.999 00:27:32.999 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:32.999 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:32.999 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:33.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:33.257 11:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.257 11:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:33.257 11:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.257 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:33.257 { 00:27:33.257 "cntlid": 11, 00:27:33.258 "qid": 0, 00:27:33.258 "state": "enabled", 00:27:33.258 "listen_address": { 00:27:33.258 "trtype": "TCP", 00:27:33.258 "adrfam": "IPv4", 00:27:33.258 "traddr": "10.0.0.2", 00:27:33.258 "trsvcid": "4420" 00:27:33.258 }, 00:27:33.258 "peer_address": { 00:27:33.258 "trtype": "TCP", 00:27:33.258 "adrfam": "IPv4", 00:27:33.258 "traddr": "10.0.0.1", 00:27:33.258 "trsvcid": "33972" 00:27:33.258 }, 00:27:33.258 "auth": { 00:27:33.258 "state": "completed", 00:27:33.258 "digest": "sha256", 00:27:33.258 "dhgroup": "ffdhe2048" 00:27:33.258 } 00:27:33.258 } 00:27:33.258 ]' 00:27:33.258 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:33.258 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:33.258 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:33.258 11:35:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:33.258 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:33.515 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:33.515 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:33.515 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:33.515 11:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:27:34.449 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:34.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:34.449 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:34.449 11:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.449 11:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:34.449 11:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.449 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:34.449 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:34.449 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.707 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.965 00:27:34.965 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:34.965 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:34.965 11:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:35.224 { 00:27:35.224 "cntlid": 13, 00:27:35.224 "qid": 0, 00:27:35.224 "state": "enabled", 00:27:35.224 "listen_address": { 00:27:35.224 "trtype": "TCP", 00:27:35.224 "adrfam": "IPv4", 00:27:35.224 "traddr": "10.0.0.2", 00:27:35.224 "trsvcid": "4420" 00:27:35.224 }, 00:27:35.224 "peer_address": { 00:27:35.224 "trtype": "TCP", 00:27:35.224 "adrfam": "IPv4", 00:27:35.224 "traddr": "10.0.0.1", 00:27:35.224 "trsvcid": "35174" 00:27:35.224 }, 00:27:35.224 "auth": { 00:27:35.224 "state": "completed", 00:27:35.224 "digest": "sha256", 00:27:35.224 "dhgroup": "ffdhe2048" 00:27:35.224 } 00:27:35.224 } 00:27:35.224 ]' 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:35.224 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:35.482 11:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:27:36.416 11:36:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:36.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:36.417 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:36.675 00:27:36.675 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:36.675 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:36.675 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:36.932 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.932 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
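The run above drives each digest/DH-group/key combination over SPDK's JSON-RPC: it pins the host's DH-HMAC-CHAP options with bdev_nvme_set_options, allows the host NQN on the subsystem with nvmf_subsystem_add_host, attaches a controller with the matching key pair, and then checks the resulting queue pair's auth fields with jq before detaching. A minimal bash sketch of one such cycle follows; the rpc.py path and the /var/tmp/host.sock socket are copied from the log, while the use of the target app's default RPC socket and of the key0/ckey0 pair with sha256/ffdhe2048 are illustrative assumptions, not the verbatim step logged at this point.

    #!/usr/bin/env bash
    # Sketch of one connect/authenticate cycle from this test, under the
    # assumptions stated above (not the actual auth.sh script).
    set -e

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc()   { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side SPDK app
    targetrpc() { "$rpc" "$@"; }                        # target app, default socket (assumption)

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562

    # Restrict the initiator to one digest and one DH group for this pass.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Allow the host on the subsystem with a bidirectional key pair (the keys
    # were loaded into the keyrings earlier with keyring_file_add_key).
    targetrpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller through the host app using the same key pair.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller exists and the queue pair finished authentication.
    [[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    qpairs=$(targetrpc nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256    ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

    # Tear down before the next combination.
    hostrpc bdev_nvme_detach_controller nvme0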
00:27:36.932 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.932 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:36.932 11:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.933 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:36.933 { 00:27:36.933 "cntlid": 15, 00:27:36.933 "qid": 0, 00:27:36.933 "state": "enabled", 00:27:36.933 "listen_address": { 00:27:36.933 "trtype": "TCP", 00:27:36.933 "adrfam": "IPv4", 00:27:36.933 "traddr": "10.0.0.2", 00:27:36.933 "trsvcid": "4420" 00:27:36.933 }, 00:27:36.933 "peer_address": { 00:27:36.933 "trtype": "TCP", 00:27:36.933 "adrfam": "IPv4", 00:27:36.933 "traddr": "10.0.0.1", 00:27:36.933 "trsvcid": "35206" 00:27:36.933 }, 00:27:36.933 "auth": { 00:27:36.933 "state": "completed", 00:27:36.933 "digest": "sha256", 00:27:36.933 "dhgroup": "ffdhe2048" 00:27:36.933 } 00:27:36.933 } 00:27:36.933 ]' 00:27:36.933 11:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:36.933 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:37.192 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:37.192 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:27:37.192 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:37.192 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:37.192 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:37.192 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:37.450 11:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:27:38.016 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:38.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:38.016 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:38.016 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.016 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.016 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.016 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.016 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:38.016 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:38.016 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.274 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.532 00:27:38.532 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:38.532 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:38.532 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:38.790 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.790 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:38.790 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.790 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.790 11:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.790 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:38.790 { 00:27:38.790 "cntlid": 17, 00:27:38.790 "qid": 0, 00:27:38.790 "state": "enabled", 00:27:38.790 "listen_address": { 00:27:38.790 "trtype": "TCP", 00:27:38.790 "adrfam": "IPv4", 00:27:38.790 "traddr": "10.0.0.2", 00:27:38.790 "trsvcid": "4420" 00:27:38.790 }, 00:27:38.790 "peer_address": { 00:27:38.790 "trtype": "TCP", 00:27:38.790 "adrfam": "IPv4", 00:27:38.790 "traddr": "10.0.0.1", 00:27:38.790 "trsvcid": "35242" 00:27:38.790 }, 00:27:38.790 "auth": { 00:27:38.790 "state": "completed", 00:27:38.790 "digest": "sha256", 00:27:38.790 "dhgroup": "ffdhe3072" 00:27:38.790 } 00:27:38.790 } 00:27:38.790 ]' 00:27:38.790 11:36:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:39.046 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:39.046 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:39.046 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:39.046 11:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:39.046 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:39.046 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:39.046 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:39.302 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:27:39.863 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:39.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:39.864 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:39.864 11:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.864 11:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:39.864 11:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.864 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:39.864 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:39.864 11:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.121 
11:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.121 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.379 00:27:40.379 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:40.379 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:40.379 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:40.637 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.637 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:40.637 11:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.637 11:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:40.637 11:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.637 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:40.637 { 00:27:40.637 "cntlid": 19, 00:27:40.637 "qid": 0, 00:27:40.637 "state": "enabled", 00:27:40.637 "listen_address": { 00:27:40.637 "trtype": "TCP", 00:27:40.637 "adrfam": "IPv4", 00:27:40.637 "traddr": "10.0.0.2", 00:27:40.637 "trsvcid": "4420" 00:27:40.637 }, 00:27:40.637 "peer_address": { 00:27:40.637 "trtype": "TCP", 00:27:40.637 "adrfam": "IPv4", 00:27:40.637 "traddr": "10.0.0.1", 00:27:40.637 "trsvcid": "35274" 00:27:40.637 }, 00:27:40.637 "auth": { 00:27:40.637 "state": "completed", 00:27:40.637 "digest": "sha256", 00:27:40.637 "dhgroup": "ffdhe3072" 00:27:40.637 } 00:27:40.637 } 00:27:40.637 ]' 00:27:40.637 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:40.637 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:40.637 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:40.895 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:40.895 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:40.895 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:40.895 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:40.895 11:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:41.153 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:27:41.719 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:41.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:41.719 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:41.719 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.719 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.719 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.719 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:41.719 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:41.719 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.977 11:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.235 00:27:42.235 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:42.235 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:27:42.235 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:42.493 { 00:27:42.493 "cntlid": 21, 00:27:42.493 "qid": 0, 00:27:42.493 "state": "enabled", 00:27:42.493 "listen_address": { 00:27:42.493 "trtype": "TCP", 00:27:42.493 "adrfam": "IPv4", 00:27:42.493 "traddr": "10.0.0.2", 00:27:42.493 "trsvcid": "4420" 00:27:42.493 }, 00:27:42.493 "peer_address": { 00:27:42.493 "trtype": "TCP", 00:27:42.493 "adrfam": "IPv4", 00:27:42.493 "traddr": "10.0.0.1", 00:27:42.493 "trsvcid": "35308" 00:27:42.493 }, 00:27:42.493 "auth": { 00:27:42.493 "state": "completed", 00:27:42.493 "digest": "sha256", 00:27:42.493 "dhgroup": "ffdhe3072" 00:27:42.493 } 00:27:42.493 } 00:27:42.493 ]' 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:42.493 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:42.750 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:42.750 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:42.750 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:43.007 11:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:27:43.572 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:43.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:43.572 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:43.572 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.572 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:43.572 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.572 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:27:43.572 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.572 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:43.829 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:27:43.829 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:43.829 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:43.829 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:27:43.829 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:43.829 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:43.830 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:27:43.830 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.830 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:43.830 11:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.830 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:43.830 11:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:44.087 00:27:44.087 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:44.087 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:44.087 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:44.344 { 00:27:44.344 "cntlid": 23, 00:27:44.344 "qid": 0, 00:27:44.344 "state": "enabled", 00:27:44.344 "listen_address": { 00:27:44.344 "trtype": "TCP", 00:27:44.344 "adrfam": "IPv4", 00:27:44.344 "traddr": "10.0.0.2", 00:27:44.344 "trsvcid": "4420" 00:27:44.344 }, 00:27:44.344 "peer_address": { 00:27:44.344 "trtype": "TCP", 00:27:44.344 "adrfam": "IPv4", 
00:27:44.344 "traddr": "10.0.0.1", 00:27:44.344 "trsvcid": "48120" 00:27:44.344 }, 00:27:44.344 "auth": { 00:27:44.344 "state": "completed", 00:27:44.344 "digest": "sha256", 00:27:44.344 "dhgroup": "ffdhe3072" 00:27:44.344 } 00:27:44.344 } 00:27:44.344 ]' 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:27:44.344 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:44.602 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:44.602 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:44.602 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:44.602 11:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:27:45.534 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:45.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:45.534 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:45.534 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.534 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.534 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.534 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.535 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.099 00:27:46.099 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:46.099 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:46.099 11:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:46.100 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.100 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:46.100 11:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.100 11:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:46.100 11:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.358 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:46.358 { 00:27:46.358 "cntlid": 25, 00:27:46.358 "qid": 0, 00:27:46.358 "state": "enabled", 00:27:46.358 "listen_address": { 00:27:46.358 "trtype": "TCP", 00:27:46.358 "adrfam": "IPv4", 00:27:46.358 "traddr": "10.0.0.2", 00:27:46.358 "trsvcid": "4420" 00:27:46.358 }, 00:27:46.358 "peer_address": { 00:27:46.358 "trtype": "TCP", 00:27:46.358 "adrfam": "IPv4", 00:27:46.358 "traddr": "10.0.0.1", 00:27:46.358 "trsvcid": "48134" 00:27:46.358 }, 00:27:46.358 "auth": { 00:27:46.358 "state": "completed", 00:27:46.358 "digest": "sha256", 00:27:46.358 "dhgroup": "ffdhe4096" 00:27:46.358 } 00:27:46.358 } 00:27:46.358 ]' 00:27:46.358 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:46.358 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:46.358 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:46.358 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:46.358 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:46.358 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:46.358 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:46.358 11:36:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:46.616 11:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:27:47.259 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:47.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:47.259 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:47.259 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.259 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:47.259 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.259 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:47.259 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:47.259 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:47.516 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:27:47.516 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:47.516 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:47.516 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:47.516 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:47.516 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:47.516 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.516 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.516 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:47.517 11:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.517 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.517 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.774 00:27:47.775 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:47.775 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:47.775 11:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:48.033 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.033 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:48.033 11:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.033 11:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:48.033 11:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.033 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:48.033 { 00:27:48.033 "cntlid": 27, 00:27:48.033 "qid": 0, 00:27:48.033 "state": "enabled", 00:27:48.033 "listen_address": { 00:27:48.033 "trtype": "TCP", 00:27:48.033 "adrfam": "IPv4", 00:27:48.033 "traddr": "10.0.0.2", 00:27:48.033 "trsvcid": "4420" 00:27:48.033 }, 00:27:48.033 "peer_address": { 00:27:48.033 "trtype": "TCP", 00:27:48.033 "adrfam": "IPv4", 00:27:48.033 "traddr": "10.0.0.1", 00:27:48.033 "trsvcid": "48150" 00:27:48.033 }, 00:27:48.033 "auth": { 00:27:48.033 "state": "completed", 00:27:48.033 "digest": "sha256", 00:27:48.033 "dhgroup": "ffdhe4096" 00:27:48.033 } 00:27:48.033 } 00:27:48.033 ]' 00:27:48.033 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:48.291 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:48.291 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:48.291 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:48.291 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:48.291 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:48.291 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:48.291 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:48.549 11:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:27:49.115 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:49.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:49.115 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 
00:27:49.115 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.115 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.115 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.115 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:49.115 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.115 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.374 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.632 00:27:49.890 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:49.890 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:49.890 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:49.890 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.890 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:49.890 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.890 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:49.890 11:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.890 
11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:49.890 { 00:27:49.890 "cntlid": 29, 00:27:49.890 "qid": 0, 00:27:49.890 "state": "enabled", 00:27:49.890 "listen_address": { 00:27:49.890 "trtype": "TCP", 00:27:49.890 "adrfam": "IPv4", 00:27:49.890 "traddr": "10.0.0.2", 00:27:49.890 "trsvcid": "4420" 00:27:49.890 }, 00:27:49.890 "peer_address": { 00:27:49.890 "trtype": "TCP", 00:27:49.890 "adrfam": "IPv4", 00:27:49.890 "traddr": "10.0.0.1", 00:27:49.890 "trsvcid": "48174" 00:27:49.890 }, 00:27:49.890 "auth": { 00:27:49.890 "state": "completed", 00:27:49.890 "digest": "sha256", 00:27:49.890 "dhgroup": "ffdhe4096" 00:27:49.890 } 00:27:49.890 } 00:27:49.890 ]' 00:27:49.890 11:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:50.148 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:50.148 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:50.148 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:50.148 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:50.148 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:50.148 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:50.148 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:50.406 11:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:27:50.972 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:51.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:51.230 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:51.797 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:51.797 { 00:27:51.797 "cntlid": 31, 00:27:51.797 "qid": 0, 00:27:51.797 "state": "enabled", 00:27:51.797 "listen_address": { 00:27:51.797 "trtype": "TCP", 00:27:51.797 "adrfam": "IPv4", 00:27:51.797 "traddr": "10.0.0.2", 00:27:51.797 "trsvcid": "4420" 00:27:51.797 }, 00:27:51.797 "peer_address": { 00:27:51.797 "trtype": "TCP", 00:27:51.797 "adrfam": "IPv4", 00:27:51.797 "traddr": "10.0.0.1", 00:27:51.797 "trsvcid": "48214" 00:27:51.797 }, 00:27:51.797 "auth": { 00:27:51.797 "state": "completed", 00:27:51.797 "digest": "sha256", 00:27:51.797 "dhgroup": "ffdhe4096" 00:27:51.797 } 00:27:51.797 } 00:27:51.797 ]' 00:27:51.797 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:52.056 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:52.056 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:52.056 11:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:27:52.056 11:36:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:52.056 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:52.056 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:52.056 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:52.314 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:27:52.879 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:52.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:52.880 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:52.880 11:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.880 11:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:52.880 11:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.880 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.880 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:52.880 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.880 11:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.137 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:27:53.138 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.704 00:27:53.704 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:53.704 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:53.704 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:53.962 { 00:27:53.962 "cntlid": 33, 00:27:53.962 "qid": 0, 00:27:53.962 "state": "enabled", 00:27:53.962 "listen_address": { 00:27:53.962 "trtype": "TCP", 00:27:53.962 "adrfam": "IPv4", 00:27:53.962 "traddr": "10.0.0.2", 00:27:53.962 "trsvcid": "4420" 00:27:53.962 }, 00:27:53.962 "peer_address": { 00:27:53.962 "trtype": "TCP", 00:27:53.962 "adrfam": "IPv4", 00:27:53.962 "traddr": "10.0.0.1", 00:27:53.962 "trsvcid": "53092" 00:27:53.962 }, 00:27:53.962 "auth": { 00:27:53.962 "state": "completed", 00:27:53.962 "digest": "sha256", 00:27:53.962 "dhgroup": "ffdhe6144" 00:27:53.962 } 00:27:53.962 } 00:27:53.962 ]' 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:53.962 11:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:53.962 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:53.962 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:53.962 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:54.220 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:27:55.153 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:27:55.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:55.153 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:55.153 11:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.153 11:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.153 11:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.154 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:55.154 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:55.154 11:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.154 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:55.717 00:27:55.717 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:55.717 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:55.717 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:55.975 { 00:27:55.975 "cntlid": 35, 00:27:55.975 "qid": 0, 00:27:55.975 "state": "enabled", 00:27:55.975 "listen_address": { 00:27:55.975 "trtype": "TCP", 00:27:55.975 "adrfam": "IPv4", 00:27:55.975 "traddr": "10.0.0.2", 00:27:55.975 "trsvcid": "4420" 00:27:55.975 }, 00:27:55.975 "peer_address": { 00:27:55.975 "trtype": "TCP", 00:27:55.975 "adrfam": "IPv4", 00:27:55.975 "traddr": "10.0.0.1", 00:27:55.975 "trsvcid": "53130" 00:27:55.975 }, 00:27:55.975 "auth": { 00:27:55.975 "state": "completed", 00:27:55.975 "digest": "sha256", 00:27:55.975 "dhgroup": "ffdhe6144" 00:27:55.975 } 00:27:55.975 } 00:27:55.975 ]' 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:55.975 11:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:55.975 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:56.233 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:27:57.175 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:57.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:57.175 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:57.175 11:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.175 11:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.175 11:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.175 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:57.175 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:57.175 11:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.175 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:57.741 00:27:57.741 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:57.741 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:27:57.741 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:27:57.999 { 00:27:57.999 "cntlid": 37, 00:27:57.999 "qid": 0, 00:27:57.999 "state": "enabled", 00:27:57.999 "listen_address": { 00:27:57.999 "trtype": "TCP", 00:27:57.999 "adrfam": "IPv4", 00:27:57.999 "traddr": "10.0.0.2", 00:27:57.999 "trsvcid": "4420" 00:27:57.999 }, 00:27:57.999 "peer_address": { 00:27:57.999 "trtype": "TCP", 00:27:57.999 "adrfam": "IPv4", 00:27:57.999 "traddr": "10.0.0.1", 00:27:57.999 "trsvcid": "53154" 00:27:57.999 }, 00:27:57.999 "auth": { 00:27:57.999 "state": "completed", 00:27:57.999 "digest": "sha256", 00:27:57.999 "dhgroup": "ffdhe6144" 00:27:57.999 } 00:27:57.999 } 00:27:57.999 ]' 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:27:57.999 11:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:27:57.999 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:27:57.999 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:27:57.999 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:27:58.257 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:27:59.190 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:27:59.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:27:59.190 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:27:59.190 11:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.190 11:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:59.190 11:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.190 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:27:59.190 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:59.190 11:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:59.190 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:27:59.756 00:27:59.756 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:27:59.756 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:27:59.756 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:00.014 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.014 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:00.014 11:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.014 11:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:00.014 11:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.014 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:00.014 { 00:28:00.014 "cntlid": 39, 00:28:00.014 "qid": 0, 00:28:00.014 "state": "enabled", 00:28:00.014 "listen_address": { 00:28:00.014 "trtype": "TCP", 00:28:00.014 "adrfam": "IPv4", 00:28:00.014 "traddr": "10.0.0.2", 00:28:00.014 "trsvcid": "4420" 00:28:00.014 }, 00:28:00.014 "peer_address": { 00:28:00.014 "trtype": "TCP", 00:28:00.014 "adrfam": "IPv4", 00:28:00.014 "traddr": "10.0.0.1", 00:28:00.014 "trsvcid": "53186" 00:28:00.014 }, 00:28:00.014 "auth": { 00:28:00.014 "state": "completed", 00:28:00.014 "digest": "sha256", 00:28:00.014 "dhgroup": "ffdhe6144" 00:28:00.014 } 00:28:00.014 } 00:28:00.014 ]' 00:28:00.014 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:00.014 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:00.014 11:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:00.014 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:00.014 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:00.014 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:00.014 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:00.014 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:00.272 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:28:01.206 11:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:01.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.206 11:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.207 11:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:01.207 11:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.207 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.207 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.771 00:28:02.029 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:02.029 11:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:02.029 11:36:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:02.029 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.029 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:02.029 11:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.029 11:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:02.029 11:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.029 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:02.029 { 00:28:02.029 "cntlid": 41, 00:28:02.029 "qid": 0, 00:28:02.029 "state": "enabled", 00:28:02.029 "listen_address": { 00:28:02.029 "trtype": "TCP", 00:28:02.029 "adrfam": "IPv4", 00:28:02.029 "traddr": "10.0.0.2", 00:28:02.029 "trsvcid": "4420" 00:28:02.029 }, 00:28:02.029 "peer_address": { 00:28:02.029 "trtype": "TCP", 00:28:02.029 "adrfam": "IPv4", 00:28:02.029 "traddr": "10.0.0.1", 00:28:02.029 "trsvcid": "53214" 00:28:02.029 }, 00:28:02.029 "auth": { 00:28:02.029 "state": "completed", 00:28:02.029 "digest": "sha256", 00:28:02.029 "dhgroup": "ffdhe8192" 00:28:02.029 } 00:28:02.029 } 00:28:02.029 ]' 00:28:02.029 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:02.287 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:02.287 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:02.287 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:02.287 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:02.287 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:02.287 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:02.287 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:02.545 11:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:28:03.109 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:03.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:03.109 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:03.109 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.109 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:03.109 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.109 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:03.109 11:36:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:03.110 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.367 11:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.933 00:28:03.933 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:03.933 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:03.933 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:04.191 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.191 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:04.191 11:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.191 11:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:04.191 11:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.191 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:04.191 { 00:28:04.191 "cntlid": 43, 00:28:04.191 "qid": 0, 00:28:04.191 "state": "enabled", 00:28:04.191 "listen_address": { 00:28:04.191 "trtype": "TCP", 00:28:04.191 "adrfam": "IPv4", 00:28:04.191 "traddr": "10.0.0.2", 00:28:04.191 "trsvcid": "4420" 00:28:04.191 }, 00:28:04.191 "peer_address": { 00:28:04.191 "trtype": "TCP", 00:28:04.191 
"adrfam": "IPv4", 00:28:04.191 "traddr": "10.0.0.1", 00:28:04.191 "trsvcid": "45678" 00:28:04.191 }, 00:28:04.191 "auth": { 00:28:04.191 "state": "completed", 00:28:04.191 "digest": "sha256", 00:28:04.191 "dhgroup": "ffdhe8192" 00:28:04.191 } 00:28:04.191 } 00:28:04.191 ]' 00:28:04.191 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:04.448 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:04.448 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:04.448 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:04.448 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:04.448 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:04.448 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:04.448 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:04.706 11:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:28:05.271 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:05.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:05.271 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:05.271 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.271 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.529 11:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.463 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:06.463 { 00:28:06.463 "cntlid": 45, 00:28:06.463 "qid": 0, 00:28:06.463 "state": "enabled", 00:28:06.463 "listen_address": { 00:28:06.463 "trtype": "TCP", 00:28:06.463 "adrfam": "IPv4", 00:28:06.463 "traddr": "10.0.0.2", 00:28:06.463 "trsvcid": "4420" 00:28:06.463 }, 00:28:06.463 "peer_address": { 00:28:06.463 "trtype": "TCP", 00:28:06.463 "adrfam": "IPv4", 00:28:06.463 "traddr": "10.0.0.1", 00:28:06.463 "trsvcid": "45716" 00:28:06.463 }, 00:28:06.463 "auth": { 00:28:06.463 "state": "completed", 00:28:06.463 "digest": "sha256", 00:28:06.463 "dhgroup": "ffdhe8192" 00:28:06.463 } 00:28:06.463 } 00:28:06.463 ]' 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:06.463 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:06.721 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:06.721 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:06.721 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:06.721 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:06.721 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:06.979 11:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:28:07.545 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:07.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:07.545 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:07.545 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.545 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.545 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.545 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:07.545 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:07.545 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:07.804 11:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:08.459 00:28:08.459 11:36:33 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:08.459 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:08.459 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:08.717 { 00:28:08.717 "cntlid": 47, 00:28:08.717 "qid": 0, 00:28:08.717 "state": "enabled", 00:28:08.717 "listen_address": { 00:28:08.717 "trtype": "TCP", 00:28:08.717 "adrfam": "IPv4", 00:28:08.717 "traddr": "10.0.0.2", 00:28:08.717 "trsvcid": "4420" 00:28:08.717 }, 00:28:08.717 "peer_address": { 00:28:08.717 "trtype": "TCP", 00:28:08.717 "adrfam": "IPv4", 00:28:08.717 "traddr": "10.0.0.1", 00:28:08.717 "trsvcid": "45740" 00:28:08.717 }, 00:28:08.717 "auth": { 00:28:08.717 "state": "completed", 00:28:08.717 "digest": "sha256", 00:28:08.717 "dhgroup": "ffdhe8192" 00:28:08.717 } 00:28:08.717 } 00:28:08.717 ]' 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:08.717 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:08.975 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:08.975 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:08.975 11:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:08.975 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:09.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.910 11:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.168 00:28:10.168 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:10.168 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:10.168 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:10.426 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.426 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:10.426 11:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.426 11:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:10.426 11:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.426 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:28:10.426 { 00:28:10.426 "cntlid": 49, 00:28:10.426 "qid": 0, 00:28:10.426 "state": "enabled", 00:28:10.426 "listen_address": { 00:28:10.426 "trtype": "TCP", 00:28:10.426 "adrfam": "IPv4", 00:28:10.426 "traddr": "10.0.0.2", 00:28:10.426 "trsvcid": "4420" 00:28:10.426 }, 00:28:10.426 "peer_address": { 00:28:10.426 "trtype": "TCP", 00:28:10.426 "adrfam": "IPv4", 00:28:10.426 "traddr": "10.0.0.1", 00:28:10.426 "trsvcid": "45762" 00:28:10.426 }, 00:28:10.426 "auth": { 00:28:10.426 "state": "completed", 00:28:10.426 "digest": "sha384", 00:28:10.426 "dhgroup": "null" 00:28:10.426 } 00:28:10.426 } 00:28:10.426 ]' 00:28:10.426 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:10.426 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:10.426 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:10.683 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:10.683 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:10.683 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:10.683 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:10.683 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:10.941 11:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:28:11.506 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:11.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:11.506 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:11.506 11:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.506 11:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.506 11:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.506 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:11.506 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:11.506 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:11.765 11:36:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.765 11:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:12.023 00:28:12.023 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:12.023 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:12.023 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:12.281 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.281 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:12.281 11:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:12.281 11:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:12.281 11:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:12.281 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:12.281 { 00:28:12.281 "cntlid": 51, 00:28:12.281 "qid": 0, 00:28:12.281 "state": "enabled", 00:28:12.281 "listen_address": { 00:28:12.281 "trtype": "TCP", 00:28:12.281 "adrfam": "IPv4", 00:28:12.281 "traddr": "10.0.0.2", 00:28:12.281 "trsvcid": "4420" 00:28:12.281 }, 00:28:12.281 "peer_address": { 00:28:12.281 "trtype": "TCP", 00:28:12.281 "adrfam": "IPv4", 00:28:12.281 "traddr": "10.0.0.1", 00:28:12.281 "trsvcid": "45790" 00:28:12.281 }, 00:28:12.281 "auth": { 00:28:12.281 "state": "completed", 00:28:12.281 "digest": "sha384", 00:28:12.281 "dhgroup": "null" 00:28:12.281 } 00:28:12.281 } 00:28:12.281 ]' 00:28:12.281 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:12.539 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:12.539 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:12.539 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:12.539 11:36:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:12.539 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:12.539 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:12.539 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:12.797 11:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:28:13.363 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:13.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:13.363 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:13.363 11:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.363 11:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.363 11:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.363 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:13.363 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:13.363 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:13.621 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
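For reference, each connect_authenticate round traced above reduces to the host/target RPC sequence below. This is a minimal sketch, not part of the test output: the $rpc shorthand is introduced here for readability, the key names key2/ckey2 refer to DH-HMAC-CHAP keys registered with the target earlier in the run (outside this excerpt), and all sockets, NQNs, addresses and flags are taken verbatim from the log above (host-side bdev_nvme calls go to /var/tmp/host.sock, target-side nvmf calls to the default application socket).

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  # restrict the host-side initiator to one digest/dhgroup combination for this round
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

  # allow the host on the target subsystem, binding it to a specific DH-HMAC-CHAP key pair
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # attach a controller from the host application; the DH-HMAC-CHAP handshake runs here
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # confirm the qpair negotiated the expected digest/dhgroup and reached the completed state
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

  # tear down before the next digest/dhgroup/key combination
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The nvme connect / nvme disconnect invocations interleaved in the log exercise the same handshake from the kernel initiator, passing the raw DHHC-1 secrets directly via --dhchap-secret and --dhchap-ctrl-secret instead of named keys.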
00:28:13.622 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.880 00:28:13.880 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:13.880 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:13.880 11:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:14.138 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.138 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:14.138 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.138 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.138 11:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.138 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:14.138 { 00:28:14.138 "cntlid": 53, 00:28:14.138 "qid": 0, 00:28:14.138 "state": "enabled", 00:28:14.138 "listen_address": { 00:28:14.138 "trtype": "TCP", 00:28:14.138 "adrfam": "IPv4", 00:28:14.138 "traddr": "10.0.0.2", 00:28:14.138 "trsvcid": "4420" 00:28:14.138 }, 00:28:14.138 "peer_address": { 00:28:14.138 "trtype": "TCP", 00:28:14.138 "adrfam": "IPv4", 00:28:14.138 "traddr": "10.0.0.1", 00:28:14.138 "trsvcid": "49612" 00:28:14.138 }, 00:28:14.138 "auth": { 00:28:14.138 "state": "completed", 00:28:14.138 "digest": "sha384", 00:28:14.138 "dhgroup": "null" 00:28:14.138 } 00:28:14.138 } 00:28:14.138 ]' 00:28:14.138 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:14.396 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:14.396 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:14.396 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:14.396 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:14.396 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:14.396 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:14.396 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:14.652 11:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:28:15.216 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:15.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:28:15.216 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:15.216 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:15.216 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:15.474 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:16.040 00:28:16.040 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:16.040 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:16.040 11:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:16.040 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.040 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:16.040 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:16.040 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:28:16.040 11:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:16.040 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:16.040 { 00:28:16.040 "cntlid": 55, 00:28:16.040 "qid": 0, 00:28:16.040 "state": "enabled", 00:28:16.040 "listen_address": { 00:28:16.040 "trtype": "TCP", 00:28:16.040 "adrfam": "IPv4", 00:28:16.040 "traddr": "10.0.0.2", 00:28:16.040 "trsvcid": "4420" 00:28:16.040 }, 00:28:16.040 "peer_address": { 00:28:16.040 "trtype": "TCP", 00:28:16.040 "adrfam": "IPv4", 00:28:16.040 "traddr": "10.0.0.1", 00:28:16.040 "trsvcid": "49642" 00:28:16.040 }, 00:28:16.040 "auth": { 00:28:16.040 "state": "completed", 00:28:16.040 "digest": "sha384", 00:28:16.040 "dhgroup": "null" 00:28:16.040 } 00:28:16.040 } 00:28:16.040 ]' 00:28:16.040 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:16.298 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:16.298 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:16.298 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:16.298 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:16.298 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:16.298 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:16.298 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:16.556 11:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:28:17.122 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:17.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:17.122 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:17.122 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:17.122 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:17.122 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:17.122 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.122 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:17.122 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.122 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:28:17.389 11:36:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.389 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.646 00:28:17.646 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:17.646 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:17.646 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:17.903 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.903 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:17.903 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:17.903 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:17.903 11:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:17.903 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:17.903 { 00:28:17.903 "cntlid": 57, 00:28:17.903 "qid": 0, 00:28:17.903 "state": "enabled", 00:28:17.903 "listen_address": { 00:28:17.903 "trtype": "TCP", 00:28:17.903 "adrfam": "IPv4", 00:28:17.903 "traddr": "10.0.0.2", 00:28:17.903 "trsvcid": "4420" 00:28:17.903 }, 00:28:17.903 "peer_address": { 00:28:17.903 "trtype": "TCP", 00:28:17.903 "adrfam": "IPv4", 00:28:17.903 "traddr": "10.0.0.1", 00:28:17.903 "trsvcid": "49678" 00:28:17.903 }, 00:28:17.903 "auth": { 00:28:17.903 "state": "completed", 00:28:17.903 "digest": "sha384", 00:28:17.903 "dhgroup": "ffdhe2048" 00:28:17.904 } 00:28:17.904 } 00:28:17.904 ]' 00:28:17.904 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:17.904 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:17.904 11:36:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:17.904 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:17.904 11:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:18.161 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:18.161 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:18.161 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:18.418 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:28:18.983 11:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:18.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:18.983 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:18.983 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:18.983 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:18.983 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:18.983 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:18.983 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:18.983 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:19.241 11:36:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.241 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.498 00:28:19.498 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:19.498 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:19.498 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:19.755 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.755 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:19.755 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:19.755 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.755 11:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:19.755 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:19.755 { 00:28:19.755 "cntlid": 59, 00:28:19.755 "qid": 0, 00:28:19.755 "state": "enabled", 00:28:19.755 "listen_address": { 00:28:19.755 "trtype": "TCP", 00:28:19.755 "adrfam": "IPv4", 00:28:19.755 "traddr": "10.0.0.2", 00:28:19.755 "trsvcid": "4420" 00:28:19.755 }, 00:28:19.755 "peer_address": { 00:28:19.755 "trtype": "TCP", 00:28:19.755 "adrfam": "IPv4", 00:28:19.755 "traddr": "10.0.0.1", 00:28:19.755 "trsvcid": "49700" 00:28:19.755 }, 00:28:19.755 "auth": { 00:28:19.755 "state": "completed", 00:28:19.755 "digest": "sha384", 00:28:19.755 "dhgroup": "ffdhe2048" 00:28:19.755 } 00:28:19.755 } 00:28:19.755 ]' 00:28:19.755 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:19.755 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:19.755 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:20.012 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:20.012 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:20.012 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:20.012 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:20.012 11:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:20.269 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:28:20.835 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:20.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:20.835 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:20.835 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:20.835 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.835 11:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:20.835 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:20.835 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:20.835 11:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.094 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.352 00:28:21.352 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:21.352 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:21.352 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:28:21.609 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.609 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:21.610 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:21.610 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:21.610 11:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:21.610 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:21.610 { 00:28:21.610 "cntlid": 61, 00:28:21.610 "qid": 0, 00:28:21.610 "state": "enabled", 00:28:21.610 "listen_address": { 00:28:21.610 "trtype": "TCP", 00:28:21.610 "adrfam": "IPv4", 00:28:21.610 "traddr": "10.0.0.2", 00:28:21.610 "trsvcid": "4420" 00:28:21.610 }, 00:28:21.610 "peer_address": { 00:28:21.610 "trtype": "TCP", 00:28:21.610 "adrfam": "IPv4", 00:28:21.610 "traddr": "10.0.0.1", 00:28:21.610 "trsvcid": "49718" 00:28:21.610 }, 00:28:21.610 "auth": { 00:28:21.610 "state": "completed", 00:28:21.610 "digest": "sha384", 00:28:21.610 "dhgroup": "ffdhe2048" 00:28:21.610 } 00:28:21.610 } 00:28:21.610 ]' 00:28:21.610 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:21.610 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:21.610 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:21.610 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:21.610 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:21.867 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:21.867 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:21.867 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:21.868 11:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:28:22.801 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:22.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:22.801 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:22.801 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:22.801 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:22.801 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:22.801 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:22.801 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:28:22.801 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:23.059 11:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:23.318 00:28:23.318 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:23.318 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:23.318 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:23.576 { 00:28:23.576 "cntlid": 63, 00:28:23.576 "qid": 0, 00:28:23.576 "state": "enabled", 00:28:23.576 "listen_address": { 00:28:23.576 "trtype": "TCP", 00:28:23.576 "adrfam": "IPv4", 00:28:23.576 "traddr": "10.0.0.2", 00:28:23.576 "trsvcid": "4420" 00:28:23.576 }, 00:28:23.576 "peer_address": { 00:28:23.576 "trtype": "TCP", 00:28:23.576 "adrfam": "IPv4", 00:28:23.576 "traddr": "10.0.0.1", 00:28:23.576 "trsvcid": "52008" 00:28:23.576 }, 00:28:23.576 "auth": { 00:28:23.576 "state": "completed", 00:28:23.576 "digest": 
"sha384", 00:28:23.576 "dhgroup": "ffdhe2048" 00:28:23.576 } 00:28:23.576 } 00:28:23.576 ]' 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:23.576 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:23.834 11:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:24.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.768 11:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.026 00:28:25.284 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:25.284 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:25.284 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:25.284 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.284 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:25.284 11:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:25.284 11:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:25.542 { 00:28:25.542 "cntlid": 65, 00:28:25.542 "qid": 0, 00:28:25.542 "state": "enabled", 00:28:25.542 "listen_address": { 00:28:25.542 "trtype": "TCP", 00:28:25.542 "adrfam": "IPv4", 00:28:25.542 "traddr": "10.0.0.2", 00:28:25.542 "trsvcid": "4420" 00:28:25.542 }, 00:28:25.542 "peer_address": { 00:28:25.542 "trtype": "TCP", 00:28:25.542 "adrfam": "IPv4", 00:28:25.542 "traddr": "10.0.0.1", 00:28:25.542 "trsvcid": "52038" 00:28:25.542 }, 00:28:25.542 "auth": { 00:28:25.542 "state": "completed", 00:28:25.542 "digest": "sha384", 00:28:25.542 "dhgroup": "ffdhe3072" 00:28:25.542 } 00:28:25.542 } 00:28:25.542 ]' 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:25.542 11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:25.800 
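Each pass visible in this trace follows the same shape: the host-side bdev_nvme module is pinned to a single digest and DH group, the host NQN is authorized on the subsystem with one --dhchap-key/--dhchap-ctrlr-key pair, a controller is attached over the host RPC socket so DH-HMAC-CHAP actually runs, the admin qpair's negotiated digest/dhgroup/state are checked with jq, and the controller is then detached and the same secrets are re-verified through the kernel initiator with nvme connect/disconnect before the host is removed again. The sketch below condenses one such pass from the commands in the trace; hostrpc and rpc_cmd are the wrappers seen above, the key names (key0, ckey0) and the addresses are the ones in the log, while the $hostnqn, $hostid, $secret and $ctrl_secret variables merely stand in for the literal values and are not part of the original script.

  # host side: only allow the digest/dhgroup under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # target side: authorize the host NQN with this key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attaching the controller triggers DH-HMAC-CHAP on the new qpair
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # confirm what was negotiated on the qpair, then tear the controller down
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  hostrpc bdev_nvme_detach_controller nvme0
  # same key pair through the kernel initiator, using the raw DHHC-1 secrets
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"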
11:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:26.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.735 11:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.993 00:28:26.993 11:36:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:26.993 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:26.993 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:27.251 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.251 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:27.251 11:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:27.251 11:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:27.251 11:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:27.251 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:27.251 { 00:28:27.251 "cntlid": 67, 00:28:27.251 "qid": 0, 00:28:27.251 "state": "enabled", 00:28:27.251 "listen_address": { 00:28:27.251 "trtype": "TCP", 00:28:27.251 "adrfam": "IPv4", 00:28:27.251 "traddr": "10.0.0.2", 00:28:27.251 "trsvcid": "4420" 00:28:27.251 }, 00:28:27.251 "peer_address": { 00:28:27.251 "trtype": "TCP", 00:28:27.251 "adrfam": "IPv4", 00:28:27.251 "traddr": "10.0.0.1", 00:28:27.251 "trsvcid": "52070" 00:28:27.251 }, 00:28:27.251 "auth": { 00:28:27.251 "state": "completed", 00:28:27.251 "digest": "sha384", 00:28:27.251 "dhgroup": "ffdhe3072" 00:28:27.251 } 00:28:27.251 } 00:28:27.251 ]' 00:28:27.251 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:27.251 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:27.509 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:27.509 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:27.509 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:27.509 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:27.509 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:27.509 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:27.767 11:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:28:28.333 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:28.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:28.333 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:28.333 11:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.333 11:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:28.333 
11:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.333 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:28.333 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:28.333 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.623 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.920 00:28:28.920 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:28.920 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:28.920 11:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:29.177 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.177 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:29.177 11:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:29.177 11:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.177 11:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:29.177 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:29.177 { 00:28:29.177 "cntlid": 69, 00:28:29.177 "qid": 0, 00:28:29.177 "state": "enabled", 00:28:29.177 "listen_address": { 
00:28:29.177 "trtype": "TCP", 00:28:29.177 "adrfam": "IPv4", 00:28:29.177 "traddr": "10.0.0.2", 00:28:29.177 "trsvcid": "4420" 00:28:29.177 }, 00:28:29.177 "peer_address": { 00:28:29.177 "trtype": "TCP", 00:28:29.177 "adrfam": "IPv4", 00:28:29.177 "traddr": "10.0.0.1", 00:28:29.177 "trsvcid": "52104" 00:28:29.177 }, 00:28:29.177 "auth": { 00:28:29.177 "state": "completed", 00:28:29.177 "digest": "sha384", 00:28:29.177 "dhgroup": "ffdhe3072" 00:28:29.177 } 00:28:29.177 } 00:28:29.177 ]' 00:28:29.177 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:29.177 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:29.177 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:29.435 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:29.435 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:29.435 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:29.435 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:29.435 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:29.692 11:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:28:30.257 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:30.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:30.257 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:30.257 11:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.257 11:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.257 11:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.257 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:30.257 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:30.257 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:30.515 
11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:30.515 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:30.773 00:28:31.031 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:31.031 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:31.031 11:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:31.031 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.031 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:31.031 11:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:31.031 11:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:31.031 11:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:31.031 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:31.031 { 00:28:31.031 "cntlid": 71, 00:28:31.031 "qid": 0, 00:28:31.031 "state": "enabled", 00:28:31.031 "listen_address": { 00:28:31.031 "trtype": "TCP", 00:28:31.031 "adrfam": "IPv4", 00:28:31.031 "traddr": "10.0.0.2", 00:28:31.031 "trsvcid": "4420" 00:28:31.031 }, 00:28:31.031 "peer_address": { 00:28:31.031 "trtype": "TCP", 00:28:31.031 "adrfam": "IPv4", 00:28:31.031 "traddr": "10.0.0.1", 00:28:31.031 "trsvcid": "52122" 00:28:31.031 }, 00:28:31.031 "auth": { 00:28:31.031 "state": "completed", 00:28:31.031 "digest": "sha384", 00:28:31.031 "dhgroup": "ffdhe3072" 00:28:31.031 } 00:28:31.031 } 00:28:31.031 ]' 00:28:31.289 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:31.289 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:31.289 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:31.289 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:28:31.289 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:31.289 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:31.289 11:36:56 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:31.289 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:31.547 11:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:28:32.114 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:32.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:28:32.372 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.373 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.939 00:28:32.939 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:32.939 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:32.939 11:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:32.939 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.939 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:32.939 11:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.939 11:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:33.198 { 00:28:33.198 "cntlid": 73, 00:28:33.198 "qid": 0, 00:28:33.198 "state": "enabled", 00:28:33.198 "listen_address": { 00:28:33.198 "trtype": "TCP", 00:28:33.198 "adrfam": "IPv4", 00:28:33.198 "traddr": "10.0.0.2", 00:28:33.198 "trsvcid": "4420" 00:28:33.198 }, 00:28:33.198 "peer_address": { 00:28:33.198 "trtype": "TCP", 00:28:33.198 "adrfam": "IPv4", 00:28:33.198 "traddr": "10.0.0.1", 00:28:33.198 "trsvcid": "52150" 00:28:33.198 }, 00:28:33.198 "auth": { 00:28:33.198 "state": "completed", 00:28:33.198 "digest": "sha384", 00:28:33.198 "dhgroup": "ffdhe4096" 00:28:33.198 } 00:28:33.198 } 00:28:33.198 ]' 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:33.198 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:33.456 11:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:28:34.022 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:34.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:34.022 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:34.022 11:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.022 11:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.280 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.846 00:28:34.846 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:34.846 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:34.846 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:34.846 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.846 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:34.846 11:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:34.846 11:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:28:35.104 11:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.104 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:35.104 { 00:28:35.104 "cntlid": 75, 00:28:35.104 "qid": 0, 00:28:35.104 "state": "enabled", 00:28:35.104 "listen_address": { 00:28:35.104 "trtype": "TCP", 00:28:35.104 "adrfam": "IPv4", 00:28:35.104 "traddr": "10.0.0.2", 00:28:35.104 "trsvcid": "4420" 00:28:35.104 }, 00:28:35.104 "peer_address": { 00:28:35.104 "trtype": "TCP", 00:28:35.104 "adrfam": "IPv4", 00:28:35.104 "traddr": "10.0.0.1", 00:28:35.104 "trsvcid": "39944" 00:28:35.104 }, 00:28:35.104 "auth": { 00:28:35.104 "state": "completed", 00:28:35.104 "digest": "sha384", 00:28:35.104 "dhgroup": "ffdhe4096" 00:28:35.104 } 00:28:35.104 } 00:28:35.104 ]' 00:28:35.104 11:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:35.104 11:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:35.104 11:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:35.104 11:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:35.104 11:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:35.104 11:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:35.104 11:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:35.104 11:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:35.362 11:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:28:36.296 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:36.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:36.296 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:36.296 11:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:36.296 11:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.296 11:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:36.296 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:36.296 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:36.296 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:36.296 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.297 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.554 00:28:36.554 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:36.554 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:36.555 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:36.812 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.812 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:36.812 11:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:36.812 11:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:36.812 11:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:36.812 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:36.812 { 00:28:36.812 "cntlid": 77, 00:28:36.812 "qid": 0, 00:28:36.813 "state": "enabled", 00:28:36.813 "listen_address": { 00:28:36.813 "trtype": "TCP", 00:28:36.813 "adrfam": "IPv4", 00:28:36.813 "traddr": "10.0.0.2", 00:28:36.813 "trsvcid": "4420" 00:28:36.813 }, 00:28:36.813 "peer_address": { 00:28:36.813 "trtype": "TCP", 00:28:36.813 "adrfam": "IPv4", 00:28:36.813 "traddr": "10.0.0.1", 00:28:36.813 "trsvcid": "39978" 00:28:36.813 }, 00:28:36.813 "auth": { 00:28:36.813 "state": "completed", 00:28:36.813 "digest": "sha384", 00:28:36.813 "dhgroup": "ffdhe4096" 00:28:36.813 } 00:28:36.813 } 00:28:36.813 ]' 00:28:36.813 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:37.071 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:37.071 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:28:37.071 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:37.071 11:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:37.071 11:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:37.071 11:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:37.071 11:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:37.328 11:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:28:37.893 11:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:37.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:37.893 11:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:37.893 11:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:37.893 11:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:37.893 11:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:37.893 11:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:37.893 11:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:37.893 11:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:38.151 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:38.409 00:28:38.409 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:38.409 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:38.409 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:38.668 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.668 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:38.668 11:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:38.668 11:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:38.668 11:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:38.668 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:38.668 { 00:28:38.668 "cntlid": 79, 00:28:38.668 "qid": 0, 00:28:38.668 "state": "enabled", 00:28:38.668 "listen_address": { 00:28:38.668 "trtype": "TCP", 00:28:38.668 "adrfam": "IPv4", 00:28:38.668 "traddr": "10.0.0.2", 00:28:38.668 "trsvcid": "4420" 00:28:38.668 }, 00:28:38.668 "peer_address": { 00:28:38.668 "trtype": "TCP", 00:28:38.668 "adrfam": "IPv4", 00:28:38.668 "traddr": "10.0.0.1", 00:28:38.668 "trsvcid": "39998" 00:28:38.668 }, 00:28:38.668 "auth": { 00:28:38.668 "state": "completed", 00:28:38.668 "digest": "sha384", 00:28:38.668 "dhgroup": "ffdhe4096" 00:28:38.668 } 00:28:38.668 } 00:28:38.668 ]' 00:28:38.668 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:38.926 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:38.926 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:38.926 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:28:38.926 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:38.926 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:38.926 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:38.926 11:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:39.184 11:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:28:39.750 11:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:39.750 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:39.750 11:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:39.750 11:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.750 11:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:39.750 11:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.750 11:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.750 11:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:39.750 11:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:39.750 11:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:40.007 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:28:40.007 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:40.007 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:40.007 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:40.007 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:40.007 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:40.008 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.008 11:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.008 11:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:40.008 11:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.008 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.008 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.573 00:28:40.573 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:40.573 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:40.573 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:40.831 { 00:28:40.831 "cntlid": 81, 00:28:40.831 "qid": 0, 00:28:40.831 "state": "enabled", 00:28:40.831 "listen_address": { 00:28:40.831 "trtype": "TCP", 00:28:40.831 "adrfam": "IPv4", 00:28:40.831 "traddr": "10.0.0.2", 00:28:40.831 "trsvcid": "4420" 00:28:40.831 }, 00:28:40.831 "peer_address": { 00:28:40.831 "trtype": "TCP", 00:28:40.831 "adrfam": "IPv4", 00:28:40.831 "traddr": "10.0.0.1", 00:28:40.831 "trsvcid": "40014" 00:28:40.831 }, 00:28:40.831 "auth": { 00:28:40.831 "state": "completed", 00:28:40.831 "digest": "sha384", 00:28:40.831 "dhgroup": "ffdhe6144" 00:28:40.831 } 00:28:40.831 } 00:28:40.831 ]' 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:40.831 11:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:41.090 11:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:28:41.657 11:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:41.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:41.915 11:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:41.915 11:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.915 11:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.915 11:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.915 11:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:41.915 11:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:41.915 11:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.915 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.481 00:28:42.481 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:42.481 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:42.481 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:42.738 { 00:28:42.738 "cntlid": 83, 00:28:42.738 "qid": 0, 00:28:42.738 "state": "enabled", 00:28:42.738 "listen_address": { 00:28:42.738 "trtype": "TCP", 00:28:42.738 "adrfam": "IPv4", 00:28:42.738 "traddr": "10.0.0.2", 00:28:42.738 "trsvcid": "4420" 00:28:42.738 }, 00:28:42.738 "peer_address": { 00:28:42.738 "trtype": "TCP", 00:28:42.738 "adrfam": "IPv4", 00:28:42.738 "traddr": "10.0.0.1", 00:28:42.738 "trsvcid": "40034" 00:28:42.738 }, 00:28:42.738 "auth": { 00:28:42.738 "state": "completed", 00:28:42.738 "digest": "sha384", 00:28:42.738 
"dhgroup": "ffdhe6144" 00:28:42.738 } 00:28:42.738 } 00:28:42.738 ]' 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:42.738 11:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:42.996 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:43.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.927 11:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:43.927 11:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.927 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.927 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.491 00:28:44.491 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:44.491 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:44.491 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:44.749 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.749 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:44.749 11:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.749 11:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:44.749 11:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.749 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:44.749 { 00:28:44.749 "cntlid": 85, 00:28:44.749 "qid": 0, 00:28:44.749 "state": "enabled", 00:28:44.749 "listen_address": { 00:28:44.749 "trtype": "TCP", 00:28:44.749 "adrfam": "IPv4", 00:28:44.749 "traddr": "10.0.0.2", 00:28:44.749 "trsvcid": "4420" 00:28:44.749 }, 00:28:44.749 "peer_address": { 00:28:44.749 "trtype": "TCP", 00:28:44.749 "adrfam": "IPv4", 00:28:44.749 "traddr": "10.0.0.1", 00:28:44.749 "trsvcid": "57026" 00:28:44.749 }, 00:28:44.750 "auth": { 00:28:44.750 "state": "completed", 00:28:44.750 "digest": "sha384", 00:28:44.750 "dhgroup": "ffdhe6144" 00:28:44.750 } 00:28:44.750 } 00:28:44.750 ]' 00:28:44.750 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:44.750 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:44.750 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:44.750 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:44.750 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:44.750 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:44.750 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:44.750 11:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:45.008 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:45.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:28:45.941 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:45.942 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:45.942 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:28:45.942 11:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.942 11:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.942 11:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.942 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:45.942 11:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:46.507 00:28:46.507 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:46.508 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:46.508 11:37:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:46.765 { 00:28:46.765 "cntlid": 87, 00:28:46.765 "qid": 0, 00:28:46.765 "state": "enabled", 00:28:46.765 "listen_address": { 00:28:46.765 "trtype": "TCP", 00:28:46.765 "adrfam": "IPv4", 00:28:46.765 "traddr": "10.0.0.2", 00:28:46.765 "trsvcid": "4420" 00:28:46.765 }, 00:28:46.765 "peer_address": { 00:28:46.765 "trtype": "TCP", 00:28:46.765 "adrfam": "IPv4", 00:28:46.765 "traddr": "10.0.0.1", 00:28:46.765 "trsvcid": "57052" 00:28:46.765 }, 00:28:46.765 "auth": { 00:28:46.765 "state": "completed", 00:28:46.765 "digest": "sha384", 00:28:46.765 "dhgroup": "ffdhe6144" 00:28:46.765 } 00:28:46.765 } 00:28:46.765 ]' 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:46.765 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:46.766 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:46.766 11:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:47.024 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:47.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.956 11:37:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:47.956 11:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:47.957 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.957 11:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.523 00:28:48.523 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:48.523 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:48.523 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:48.781 { 00:28:48.781 "cntlid": 89, 00:28:48.781 "qid": 0, 00:28:48.781 "state": "enabled", 00:28:48.781 "listen_address": { 00:28:48.781 "trtype": "TCP", 00:28:48.781 "adrfam": "IPv4", 00:28:48.781 "traddr": "10.0.0.2", 00:28:48.781 
"trsvcid": "4420" 00:28:48.781 }, 00:28:48.781 "peer_address": { 00:28:48.781 "trtype": "TCP", 00:28:48.781 "adrfam": "IPv4", 00:28:48.781 "traddr": "10.0.0.1", 00:28:48.781 "trsvcid": "57078" 00:28:48.781 }, 00:28:48.781 "auth": { 00:28:48.781 "state": "completed", 00:28:48.781 "digest": "sha384", 00:28:48.781 "dhgroup": "ffdhe8192" 00:28:48.781 } 00:28:48.781 } 00:28:48.781 ]' 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:48.781 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:49.076 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:49.076 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:49.076 11:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:49.375 11:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:28:49.939 11:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:49.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:49.939 11:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:49.939 11:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.939 11:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:49.939 11:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.939 11:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:49.939 11:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:49.939 11:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:49.939 11:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.197 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.197 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.763 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:50.763 { 00:28:50.763 "cntlid": 91, 00:28:50.763 "qid": 0, 00:28:50.763 "state": "enabled", 00:28:50.763 "listen_address": { 00:28:50.763 "trtype": "TCP", 00:28:50.763 "adrfam": "IPv4", 00:28:50.763 "traddr": "10.0.0.2", 00:28:50.763 "trsvcid": "4420" 00:28:50.763 }, 00:28:50.763 "peer_address": { 00:28:50.763 "trtype": "TCP", 00:28:50.763 "adrfam": "IPv4", 00:28:50.763 "traddr": "10.0.0.1", 00:28:50.763 "trsvcid": "57088" 00:28:50.763 }, 00:28:50.763 "auth": { 00:28:50.763 "state": "completed", 00:28:50.763 "digest": "sha384", 00:28:50.763 "dhgroup": "ffdhe8192" 00:28:50.763 } 00:28:50.763 } 00:28:50.763 ]' 00:28:50.763 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:51.021 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:51.021 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:51.021 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:51.021 11:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:51.021 11:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:51.021 11:37:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:51.021 11:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:51.279 11:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:28:51.844 11:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:51.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:51.844 11:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:51.844 11:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.844 11:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:51.844 11:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.844 11:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:51.844 11:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:51.844 11:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.101 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.666 00:28:52.666 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:52.666 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:52.666 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:52.923 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.923 11:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:52.923 11:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:52.923 11:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:52.923 11:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:52.923 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:52.923 { 00:28:52.923 "cntlid": 93, 00:28:52.923 "qid": 0, 00:28:52.923 "state": "enabled", 00:28:52.923 "listen_address": { 00:28:52.923 "trtype": "TCP", 00:28:52.923 "adrfam": "IPv4", 00:28:52.923 "traddr": "10.0.0.2", 00:28:52.923 "trsvcid": "4420" 00:28:52.923 }, 00:28:52.923 "peer_address": { 00:28:52.923 "trtype": "TCP", 00:28:52.923 "adrfam": "IPv4", 00:28:52.923 "traddr": "10.0.0.1", 00:28:52.923 "trsvcid": "57118" 00:28:52.923 }, 00:28:52.923 "auth": { 00:28:52.923 "state": "completed", 00:28:52.923 "digest": "sha384", 00:28:52.923 "dhgroup": "ffdhe8192" 00:28:52.923 } 00:28:52.923 } 00:28:52.923 ]' 00:28:52.924 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:53.181 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:53.181 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:53.181 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:53.181 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:53.181 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:53.181 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:53.181 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:53.438 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:28:54.004 11:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:54.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:54.004 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:54.004 11:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.004 11:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.004 11:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.004 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:54.004 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:54.004 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:54.264 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:28:54.829 00:28:54.829 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:54.829 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:54.829 11:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:55.087 11:37:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:55.087 { 00:28:55.087 "cntlid": 95, 00:28:55.087 "qid": 0, 00:28:55.087 "state": "enabled", 00:28:55.087 "listen_address": { 00:28:55.087 "trtype": "TCP", 00:28:55.087 "adrfam": "IPv4", 00:28:55.087 "traddr": "10.0.0.2", 00:28:55.087 "trsvcid": "4420" 00:28:55.087 }, 00:28:55.087 "peer_address": { 00:28:55.087 "trtype": "TCP", 00:28:55.087 "adrfam": "IPv4", 00:28:55.087 "traddr": "10.0.0.1", 00:28:55.087 "trsvcid": "47098" 00:28:55.087 }, 00:28:55.087 "auth": { 00:28:55.087 "state": "completed", 00:28:55.087 "digest": "sha384", 00:28:55.087 "dhgroup": "ffdhe8192" 00:28:55.087 } 00:28:55.087 } 00:28:55.087 ]' 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:28:55.087 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:55.345 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:55.345 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:55.345 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:55.345 11:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:56.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.279 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.537 00:28:56.537 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:56.537 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:56.537 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:56.795 { 00:28:56.795 "cntlid": 97, 00:28:56.795 "qid": 0, 00:28:56.795 "state": "enabled", 00:28:56.795 "listen_address": { 00:28:56.795 "trtype": "TCP", 00:28:56.795 "adrfam": "IPv4", 00:28:56.795 "traddr": "10.0.0.2", 00:28:56.795 "trsvcid": "4420" 00:28:56.795 }, 00:28:56.795 "peer_address": { 00:28:56.795 "trtype": "TCP", 00:28:56.795 "adrfam": "IPv4", 00:28:56.795 "traddr": "10.0.0.1", 00:28:56.795 "trsvcid": "47118" 00:28:56.795 }, 00:28:56.795 "auth": { 00:28:56.795 "state": "completed", 00:28:56.795 "digest": "sha512", 00:28:56.795 "dhgroup": "null" 00:28:56.795 } 00:28:56.795 } 00:28:56.795 ]' 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:56.795 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:57.054 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:57.054 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:57.054 11:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:57.054 11:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:28:57.987 11:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:57.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:57.987 11:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:57.987 11:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:57.987 11:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:57.987 11:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:57.987 11:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:57.987 11:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:28:57.987 11:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.245 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.503 00:28:58.503 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:28:58.503 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:28:58.503 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:28:58.761 { 00:28:58.761 "cntlid": 99, 00:28:58.761 "qid": 0, 00:28:58.761 "state": "enabled", 00:28:58.761 "listen_address": { 00:28:58.761 "trtype": "TCP", 00:28:58.761 "adrfam": "IPv4", 00:28:58.761 "traddr": "10.0.0.2", 00:28:58.761 "trsvcid": "4420" 00:28:58.761 }, 00:28:58.761 "peer_address": { 00:28:58.761 "trtype": "TCP", 00:28:58.761 "adrfam": "IPv4", 00:28:58.761 "traddr": "10.0.0.1", 00:28:58.761 "trsvcid": "47144" 00:28:58.761 }, 00:28:58.761 "auth": { 00:28:58.761 "state": "completed", 00:28:58.761 "digest": "sha512", 00:28:58.761 "dhgroup": "null" 00:28:58.761 } 00:28:58.761 } 00:28:58.761 ]' 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:28:58.761 11:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:28:59.019 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 
00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:28:59.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:59.953 11:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:00.210 00:29:00.210 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:00.210 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:00.210 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:00.469 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.469 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:00.469 11:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:00.469 11:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:00.469 11:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:00.469 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:00.469 { 00:29:00.469 "cntlid": 101, 00:29:00.469 "qid": 0, 00:29:00.469 "state": "enabled", 00:29:00.469 "listen_address": { 00:29:00.469 "trtype": "TCP", 00:29:00.469 "adrfam": "IPv4", 00:29:00.469 "traddr": "10.0.0.2", 00:29:00.469 "trsvcid": "4420" 00:29:00.469 }, 00:29:00.469 "peer_address": { 00:29:00.469 "trtype": "TCP", 00:29:00.469 "adrfam": "IPv4", 00:29:00.469 "traddr": "10.0.0.1", 00:29:00.469 "trsvcid": "47172" 00:29:00.469 }, 00:29:00.469 "auth": { 00:29:00.469 "state": "completed", 00:29:00.469 "digest": "sha512", 00:29:00.469 "dhgroup": "null" 00:29:00.469 } 00:29:00.469 } 00:29:00.469 ]' 00:29:00.469 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:00.469 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:00.469 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:00.727 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:00.727 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:00.727 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:00.727 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:00.727 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:00.985 11:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:29:01.550 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:01.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:01.550 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:01.550 11:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.550 11:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.550 11:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.550 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:01.550 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:29:01.550 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:29:01.808 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:01.809 11:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:02.067 00:29:02.067 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:02.067 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:02.067 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:02.325 { 00:29:02.325 "cntlid": 103, 00:29:02.325 "qid": 0, 00:29:02.325 "state": "enabled", 00:29:02.325 "listen_address": { 00:29:02.325 "trtype": "TCP", 00:29:02.325 "adrfam": "IPv4", 00:29:02.325 "traddr": "10.0.0.2", 00:29:02.325 "trsvcid": "4420" 00:29:02.325 }, 00:29:02.325 "peer_address": { 00:29:02.325 "trtype": "TCP", 00:29:02.325 "adrfam": "IPv4", 00:29:02.325 "traddr": "10.0.0.1", 00:29:02.325 "trsvcid": "47186" 00:29:02.325 }, 00:29:02.325 "auth": { 00:29:02.325 "state": "completed", 00:29:02.325 "digest": "sha512", 00:29:02.325 "dhgroup": "null" 00:29:02.325 } 00:29:02.325 } 00:29:02.325 ]' 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:02.325 11:37:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:02.325 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:02.583 11:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:29:03.517 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:03.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:03.517 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:03.517 11:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.517 11:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.517 11:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.517 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:03.517 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:03.518 11:37:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:03.518 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:03.775 00:29:04.033 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:04.033 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:04.033 11:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:04.033 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.033 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:04.033 11:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.033 11:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:04.291 { 00:29:04.291 "cntlid": 105, 00:29:04.291 "qid": 0, 00:29:04.291 "state": "enabled", 00:29:04.291 "listen_address": { 00:29:04.291 "trtype": "TCP", 00:29:04.291 "adrfam": "IPv4", 00:29:04.291 "traddr": "10.0.0.2", 00:29:04.291 "trsvcid": "4420" 00:29:04.291 }, 00:29:04.291 "peer_address": { 00:29:04.291 "trtype": "TCP", 00:29:04.291 "adrfam": "IPv4", 00:29:04.291 "traddr": "10.0.0.1", 00:29:04.291 "trsvcid": "34628" 00:29:04.291 }, 00:29:04.291 "auth": { 00:29:04.291 "state": "completed", 00:29:04.291 "digest": "sha512", 00:29:04.291 "dhgroup": "ffdhe2048" 00:29:04.291 } 00:29:04.291 } 00:29:04.291 ]' 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:04.291 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:04.549 11:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 
809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:05.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.483 11:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.484 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.484 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:05.741 00:29:05.741 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:05.741 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:05.741 11:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:05.999 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.999 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:05.999 11:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:05.999 11:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:05.999 11:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:05.999 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:05.999 { 00:29:05.999 "cntlid": 107, 00:29:05.999 "qid": 0, 00:29:05.999 "state": "enabled", 00:29:05.999 "listen_address": { 00:29:05.999 "trtype": "TCP", 00:29:05.999 "adrfam": "IPv4", 00:29:05.999 "traddr": "10.0.0.2", 00:29:05.999 "trsvcid": "4420" 00:29:05.999 }, 00:29:05.999 "peer_address": { 00:29:05.999 "trtype": "TCP", 00:29:05.999 "adrfam": "IPv4", 00:29:05.999 "traddr": "10.0.0.1", 00:29:05.999 "trsvcid": "34646" 00:29:05.999 }, 00:29:05.999 "auth": { 00:29:05.999 "state": "completed", 00:29:05.999 "digest": "sha512", 00:29:05.999 "dhgroup": "ffdhe2048" 00:29:05.999 } 00:29:05.999 } 00:29:05.999 ]' 00:29:05.999 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:05.999 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:05.999 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:06.257 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:06.257 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:06.257 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:06.257 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:06.257 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:06.514 11:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:29:07.079 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:07.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:07.079 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:07.079 11:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.079 11:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.079 11:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.079 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:07.079 11:37:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.079 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.337 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:29:07.337 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:07.337 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:07.337 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:07.338 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:07.338 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:07.338 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.338 11:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.338 11:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.338 11:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.338 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.338 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.596 00:29:07.596 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:07.596 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:07.596 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:07.855 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.855 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:07.855 11:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:07.855 11:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:07.855 11:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:07.855 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:07.855 { 00:29:07.855 "cntlid": 109, 00:29:07.855 "qid": 0, 00:29:07.855 "state": "enabled", 00:29:07.855 "listen_address": { 00:29:07.855 "trtype": "TCP", 00:29:07.855 "adrfam": "IPv4", 00:29:07.855 "traddr": "10.0.0.2", 00:29:07.856 "trsvcid": "4420" 00:29:07.856 }, 00:29:07.856 "peer_address": { 00:29:07.856 "trtype": "TCP", 00:29:07.856 
"adrfam": "IPv4", 00:29:07.856 "traddr": "10.0.0.1", 00:29:07.856 "trsvcid": "34666" 00:29:07.856 }, 00:29:07.856 "auth": { 00:29:07.856 "state": "completed", 00:29:07.856 "digest": "sha512", 00:29:07.856 "dhgroup": "ffdhe2048" 00:29:07.856 } 00:29:07.856 } 00:29:07.856 ]' 00:29:07.856 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:07.856 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:07.856 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:07.856 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:07.856 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:08.113 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:08.113 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:08.113 11:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:08.113 11:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:29:09.047 11:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:09.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:09.047 11:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:09.047 11:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.047 11:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.047 11:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.047 11:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:09.047 11:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:09.047 11:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:09.306 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:09.564 00:29:09.564 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:09.564 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:09.564 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:09.828 { 00:29:09.828 "cntlid": 111, 00:29:09.828 "qid": 0, 00:29:09.828 "state": "enabled", 00:29:09.828 "listen_address": { 00:29:09.828 "trtype": "TCP", 00:29:09.828 "adrfam": "IPv4", 00:29:09.828 "traddr": "10.0.0.2", 00:29:09.828 "trsvcid": "4420" 00:29:09.828 }, 00:29:09.828 "peer_address": { 00:29:09.828 "trtype": "TCP", 00:29:09.828 "adrfam": "IPv4", 00:29:09.828 "traddr": "10.0.0.1", 00:29:09.828 "trsvcid": "34698" 00:29:09.828 }, 00:29:09.828 "auth": { 00:29:09.828 "state": "completed", 00:29:09.828 "digest": "sha512", 00:29:09.828 "dhgroup": "ffdhe2048" 00:29:09.828 } 00:29:09.828 } 00:29:09.828 ]' 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:09.828 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:09.829 11:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:10.149 11:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:29:10.715 11:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:10.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:10.715 11:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:10.715 11:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.715 11:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.715 11:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.715 11:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.715 11:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:10.715 11:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:10.715 11:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:10.974 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
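The qpair check that follows (and each of the earlier ones) always inspects the same three fields of the first qpair returned by nvmf_subsystem_get_qpairs. A condensed sketch of that verification, assuming the same subsystem NQN as above; the expected dhgroup matches the ffdhe3072 pass running at this point, and jq -e is used here only to fold the trace's three separate [[ ... ]] comparisons into a single exit status.

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# Fail the step unless the qpair authenticated with the expected digest and DH group.
echo "$qpairs" | jq -e '.[0].auth
    | .digest == "sha512" and .dhgroup == "ffdhe3072" and .state == "completed"' > /dev/null
# The controller name is checked the same way: bdev_nvme_get_controllers | jq -r '.[].name'
# should print nvme0 before the detach.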
00:29:11.540 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:11.540 { 00:29:11.540 "cntlid": 113, 00:29:11.540 "qid": 0, 00:29:11.540 "state": "enabled", 00:29:11.540 "listen_address": { 00:29:11.540 "trtype": "TCP", 00:29:11.540 "adrfam": "IPv4", 00:29:11.540 "traddr": "10.0.0.2", 00:29:11.540 "trsvcid": "4420" 00:29:11.540 }, 00:29:11.540 "peer_address": { 00:29:11.540 "trtype": "TCP", 00:29:11.540 "adrfam": "IPv4", 00:29:11.540 "traddr": "10.0.0.1", 00:29:11.540 "trsvcid": "34728" 00:29:11.540 }, 00:29:11.540 "auth": { 00:29:11.540 "state": "completed", 00:29:11.540 "digest": "sha512", 00:29:11.540 "dhgroup": "ffdhe3072" 00:29:11.540 } 00:29:11.540 } 00:29:11.540 ]' 00:29:11.540 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:11.799 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:11.799 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:11.799 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:11.799 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:11.799 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:11.799 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:11.799 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:12.056 11:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:29:12.621 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:12.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:12.621 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:12.621 11:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
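Zooming out, this whole portion of the log is the same cycle repeated over a matrix of DH groups and key indices: the outer for dhgroup loop visible in the trace moves from null through ffdhe2048 and ffdhe3072 towards ffdhe4096, the inner for keyid loop walks key0 through key3, and key3 is the one entry without a controller key (its add_host and attach_controller calls carry no --dhchap-ctrlr-key because ckeys[3] is empty in the ${ckeys[$3]:+...} expansion). An outline of that structure, reconstructed from the trace; the exact contents of the dhgroups and keys arrays, and any digests other than sha512, are not visible in this excerpt.

for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do         # "${dhgroups[@]}" in target/auth.sh
    for keyid in 0 1 2 3; do                                   # "${!keys[@]}"
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"        # add_host, attach, qpair checks, detach
        # then: nvme connect with the matching DHHC-1 secrets, nvme disconnect,
        # and nvmf_subsystem_remove_host before the next iteration
    done
done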
00:29:12.621 11:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.879 11:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.137 11:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.137 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:13.137 11:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:13.395 00:29:13.395 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:13.395 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:13.395 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:13.654 { 00:29:13.654 
"cntlid": 115, 00:29:13.654 "qid": 0, 00:29:13.654 "state": "enabled", 00:29:13.654 "listen_address": { 00:29:13.654 "trtype": "TCP", 00:29:13.654 "adrfam": "IPv4", 00:29:13.654 "traddr": "10.0.0.2", 00:29:13.654 "trsvcid": "4420" 00:29:13.654 }, 00:29:13.654 "peer_address": { 00:29:13.654 "trtype": "TCP", 00:29:13.654 "adrfam": "IPv4", 00:29:13.654 "traddr": "10.0.0.1", 00:29:13.654 "trsvcid": "45648" 00:29:13.654 }, 00:29:13.654 "auth": { 00:29:13.654 "state": "completed", 00:29:13.654 "digest": "sha512", 00:29:13.654 "dhgroup": "ffdhe3072" 00:29:13.654 } 00:29:13.654 } 00:29:13.654 ]' 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:13.654 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:13.912 11:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:14.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:14.847 11:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:15.106 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:15.365 { 00:29:15.365 "cntlid": 117, 00:29:15.365 "qid": 0, 00:29:15.365 "state": "enabled", 00:29:15.365 "listen_address": { 00:29:15.365 "trtype": "TCP", 00:29:15.365 "adrfam": "IPv4", 00:29:15.365 "traddr": "10.0.0.2", 00:29:15.365 "trsvcid": "4420" 00:29:15.365 }, 00:29:15.365 "peer_address": { 00:29:15.365 "trtype": "TCP", 00:29:15.365 "adrfam": "IPv4", 00:29:15.365 "traddr": "10.0.0.1", 00:29:15.365 "trsvcid": "45670" 00:29:15.365 }, 00:29:15.365 "auth": { 00:29:15.365 "state": "completed", 00:29:15.365 "digest": "sha512", 00:29:15.365 "dhgroup": "ffdhe3072" 00:29:15.365 } 00:29:15.365 } 00:29:15.365 ]' 00:29:15.365 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:15.624 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:15.624 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:15.624 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:15.624 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:29:15.624 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:15.624 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:15.624 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:15.882 11:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:29:16.448 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:16.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:16.448 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:16.448 11:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.448 11:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:16.448 11:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.449 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:16.449 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:16.449 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:16.707 11:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:16.966 00:29:16.966 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:16.966 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:16.966 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:17.234 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.234 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:17.234 11:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:17.234 11:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.234 11:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:17.234 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:17.234 { 00:29:17.234 "cntlid": 119, 00:29:17.234 "qid": 0, 00:29:17.234 "state": "enabled", 00:29:17.234 "listen_address": { 00:29:17.234 "trtype": "TCP", 00:29:17.234 "adrfam": "IPv4", 00:29:17.234 "traddr": "10.0.0.2", 00:29:17.234 "trsvcid": "4420" 00:29:17.234 }, 00:29:17.234 "peer_address": { 00:29:17.234 "trtype": "TCP", 00:29:17.234 "adrfam": "IPv4", 00:29:17.234 "traddr": "10.0.0.1", 00:29:17.234 "trsvcid": "45694" 00:29:17.234 }, 00:29:17.234 "auth": { 00:29:17.234 "state": "completed", 00:29:17.234 "digest": "sha512", 00:29:17.234 "dhgroup": "ffdhe3072" 00:29:17.234 } 00:29:17.234 } 00:29:17.234 ]' 00:29:17.234 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:17.235 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:17.235 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:17.496 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:29:17.496 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:17.496 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:17.496 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:17.496 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:17.754 11:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:29:18.321 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:18.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:18.321 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:18.321 11:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.321 11:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.321 11:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.321 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:18.321 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:18.321 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:18.321 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.580 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:18.839 00:29:18.839 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:18.839 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:18.839 11:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:19.097 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.097 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:19.097 11:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:19.097 11:37:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:19.097 11:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:19.097 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:19.097 { 00:29:19.097 "cntlid": 121, 00:29:19.097 "qid": 0, 00:29:19.097 "state": "enabled", 00:29:19.097 "listen_address": { 00:29:19.097 "trtype": "TCP", 00:29:19.097 "adrfam": "IPv4", 00:29:19.097 "traddr": "10.0.0.2", 00:29:19.097 "trsvcid": "4420" 00:29:19.097 }, 00:29:19.097 "peer_address": { 00:29:19.097 "trtype": "TCP", 00:29:19.097 "adrfam": "IPv4", 00:29:19.097 "traddr": "10.0.0.1", 00:29:19.097 "trsvcid": "45736" 00:29:19.097 }, 00:29:19.097 "auth": { 00:29:19.097 "state": "completed", 00:29:19.097 "digest": "sha512", 00:29:19.097 "dhgroup": "ffdhe4096" 00:29:19.097 } 00:29:19.097 } 00:29:19.097 ]' 00:29:19.097 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:19.356 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:19.356 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:19.356 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:19.356 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:19.356 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:19.356 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:19.356 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:19.614 11:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:29:20.180 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:20.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:20.180 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:20.180 11:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.180 11:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.180 11:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.180 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:20.180 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:20.180 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.438 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:20.697 00:29:20.697 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:20.697 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:20.697 11:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:20.955 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:20.955 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:20.955 11:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.955 11:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:20.955 11:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:20.955 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:20.955 { 00:29:20.955 "cntlid": 123, 00:29:20.955 "qid": 0, 00:29:20.955 "state": "enabled", 00:29:20.955 "listen_address": { 00:29:20.955 "trtype": "TCP", 00:29:20.955 "adrfam": "IPv4", 00:29:20.955 "traddr": "10.0.0.2", 00:29:20.955 "trsvcid": "4420" 00:29:20.955 }, 00:29:20.955 "peer_address": { 00:29:20.955 "trtype": "TCP", 00:29:20.955 "adrfam": "IPv4", 00:29:20.955 "traddr": "10.0.0.1", 00:29:20.955 "trsvcid": "45754" 00:29:20.955 }, 00:29:20.955 "auth": { 00:29:20.955 "state": "completed", 00:29:20.955 "digest": "sha512", 00:29:20.955 "dhgroup": "ffdhe4096" 00:29:20.955 } 00:29:20.955 } 00:29:20.955 ]' 00:29:20.955 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:21.213 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:29:21.213 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:21.213 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:21.213 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:21.213 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:21.213 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:21.213 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:21.472 11:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:29:22.038 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:22.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:22.038 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:22.038 11:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.038 11:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:22.038 11:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.038 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:22.038 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:22.038 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.296 
11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.296 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:22.863 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:22.863 { 00:29:22.863 "cntlid": 125, 00:29:22.863 "qid": 0, 00:29:22.863 "state": "enabled", 00:29:22.863 "listen_address": { 00:29:22.863 "trtype": "TCP", 00:29:22.863 "adrfam": "IPv4", 00:29:22.863 "traddr": "10.0.0.2", 00:29:22.863 "trsvcid": "4420" 00:29:22.863 }, 00:29:22.863 "peer_address": { 00:29:22.863 "trtype": "TCP", 00:29:22.863 "adrfam": "IPv4", 00:29:22.863 "traddr": "10.0.0.1", 00:29:22.863 "trsvcid": "45794" 00:29:22.863 }, 00:29:22.863 "auth": { 00:29:22.863 "state": "completed", 00:29:22.863 "digest": "sha512", 00:29:22.863 "dhgroup": "ffdhe4096" 00:29:22.863 } 00:29:22.863 } 00:29:22.863 ]' 00:29:22.863 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:23.122 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:23.122 11:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:23.122 11:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:23.122 11:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:23.122 11:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:23.122 11:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:23.122 11:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:23.380 11:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret 
DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:29:23.946 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:23.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:23.946 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:23.946 11:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:23.946 11:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:23.946 11:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:23.946 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:23.946 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:23.946 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:24.204 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:24.771 00:29:24.771 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:24.771 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:24.771 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:24.772 11:37:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:24.772 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:24.772 11:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.772 11:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.772 11:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.772 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:24.772 { 00:29:24.772 "cntlid": 127, 00:29:24.772 "qid": 0, 00:29:24.772 "state": "enabled", 00:29:24.772 "listen_address": { 00:29:24.772 "trtype": "TCP", 00:29:24.772 "adrfam": "IPv4", 00:29:24.772 "traddr": "10.0.0.2", 00:29:24.772 "trsvcid": "4420" 00:29:24.772 }, 00:29:24.772 "peer_address": { 00:29:24.772 "trtype": "TCP", 00:29:24.772 "adrfam": "IPv4", 00:29:24.772 "traddr": "10.0.0.1", 00:29:24.772 "trsvcid": "36978" 00:29:24.772 }, 00:29:24.772 "auth": { 00:29:24.772 "state": "completed", 00:29:24.772 "digest": "sha512", 00:29:24.772 "dhgroup": "ffdhe4096" 00:29:24.772 } 00:29:24.772 } 00:29:24.772 ]' 00:29:24.772 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:25.030 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:25.030 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:25.030 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:29:25.030 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:25.030 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:25.030 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:25.030 11:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:25.288 11:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:29:25.854 11:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:25.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:25.854 11:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:25.854 11:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.854 11:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.854 11:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.854 11:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:25.854 11:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:25.854 11:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
00:29:25.854 11:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.112 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:26.678 00:29:26.678 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:26.678 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:26.678 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:26.936 { 00:29:26.936 "cntlid": 129, 00:29:26.936 "qid": 0, 00:29:26.936 "state": "enabled", 00:29:26.936 "listen_address": { 00:29:26.936 "trtype": "TCP", 00:29:26.936 "adrfam": "IPv4", 00:29:26.936 "traddr": "10.0.0.2", 00:29:26.936 "trsvcid": "4420" 00:29:26.936 }, 00:29:26.936 "peer_address": { 00:29:26.936 "trtype": "TCP", 00:29:26.936 "adrfam": "IPv4", 00:29:26.936 "traddr": "10.0.0.1", 00:29:26.936 "trsvcid": "37004" 00:29:26.936 }, 00:29:26.936 "auth": { 
00:29:26.936 "state": "completed", 00:29:26.936 "digest": "sha512", 00:29:26.936 "dhgroup": "ffdhe6144" 00:29:26.936 } 00:29:26.936 } 00:29:26.936 ]' 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:26.936 11:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:27.195 11:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:29:28.129 11:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:28.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:28.129 11:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:28.129 11:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.129 11:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.129 11:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.130 11:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:28.130 11:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:28.130 11:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.130 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:28.694 00:29:28.694 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:28.694 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:28.694 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:28.950 { 00:29:28.950 "cntlid": 131, 00:29:28.950 "qid": 0, 00:29:28.950 "state": "enabled", 00:29:28.950 "listen_address": { 00:29:28.950 "trtype": "TCP", 00:29:28.950 "adrfam": "IPv4", 00:29:28.950 "traddr": "10.0.0.2", 00:29:28.950 "trsvcid": "4420" 00:29:28.950 }, 00:29:28.950 "peer_address": { 00:29:28.950 "trtype": "TCP", 00:29:28.950 "adrfam": "IPv4", 00:29:28.950 "traddr": "10.0.0.1", 00:29:28.950 "trsvcid": "37026" 00:29:28.950 }, 00:29:28.950 "auth": { 00:29:28.950 "state": "completed", 00:29:28.950 "digest": "sha512", 00:29:28.950 "dhgroup": "ffdhe6144" 00:29:28.950 } 00:29:28.950 } 00:29:28.950 ]' 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:28.950 11:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:29.205 11:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:29:30.135 11:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:30.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:30.136 11:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:30.136 11:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.136 11:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.136 11:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.136 11:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:30.136 11:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:30.136 11:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:30.136 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:29:30.747 00:29:30.747 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:30.747 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:30.747 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:31.018 { 00:29:31.018 "cntlid": 133, 00:29:31.018 "qid": 0, 00:29:31.018 "state": "enabled", 00:29:31.018 "listen_address": { 00:29:31.018 "trtype": "TCP", 00:29:31.018 "adrfam": "IPv4", 00:29:31.018 "traddr": "10.0.0.2", 00:29:31.018 "trsvcid": "4420" 00:29:31.018 }, 00:29:31.018 "peer_address": { 00:29:31.018 "trtype": "TCP", 00:29:31.018 "adrfam": "IPv4", 00:29:31.018 "traddr": "10.0.0.1", 00:29:31.018 "trsvcid": "37050" 00:29:31.018 }, 00:29:31.018 "auth": { 00:29:31.018 "state": "completed", 00:29:31.018 "digest": "sha512", 00:29:31.018 "dhgroup": "ffdhe6144" 00:29:31.018 } 00:29:31.018 } 00:29:31.018 ]' 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:31.018 11:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:31.275 11:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:29:31.841 11:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:32.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:32.099 11:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:32.099 11:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.099 11:37:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.099 11:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.099 11:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:32.099 11:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:32.099 11:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:32.099 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:32.664 00:29:32.664 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:32.664 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:32.664 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:32.921 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:32.921 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:32.921 11:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.921 11:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:32.921 11:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.921 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:32.921 { 00:29:32.921 "cntlid": 135, 00:29:32.921 "qid": 0, 00:29:32.921 "state": "enabled", 00:29:32.921 "listen_address": { 
00:29:32.921 "trtype": "TCP", 00:29:32.921 "adrfam": "IPv4", 00:29:32.921 "traddr": "10.0.0.2", 00:29:32.922 "trsvcid": "4420" 00:29:32.922 }, 00:29:32.922 "peer_address": { 00:29:32.922 "trtype": "TCP", 00:29:32.922 "adrfam": "IPv4", 00:29:32.922 "traddr": "10.0.0.1", 00:29:32.922 "trsvcid": "37068" 00:29:32.922 }, 00:29:32.922 "auth": { 00:29:32.922 "state": "completed", 00:29:32.922 "digest": "sha512", 00:29:32.922 "dhgroup": "ffdhe6144" 00:29:32.922 } 00:29:32.922 } 00:29:32.922 ]' 00:29:32.922 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:32.922 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:32.922 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:32.922 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:29:32.922 11:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:32.922 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:32.922 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:32.922 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:33.179 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:29:34.114 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:34.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:34.114 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:34.114 11:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.114 11:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.114 11:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.114 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:29:34.114 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:34.114 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:34.114 11:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.114 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:34.682 00:29:34.940 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:34.940 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:34.940 11:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:34.940 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.940 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:34.940 11:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:34.940 11:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:35.198 11:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:35.198 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:35.198 { 00:29:35.198 "cntlid": 137, 00:29:35.198 "qid": 0, 00:29:35.198 "state": "enabled", 00:29:35.198 "listen_address": { 00:29:35.198 "trtype": "TCP", 00:29:35.198 "adrfam": "IPv4", 00:29:35.198 "traddr": "10.0.0.2", 00:29:35.198 "trsvcid": "4420" 00:29:35.198 }, 00:29:35.198 "peer_address": { 00:29:35.198 "trtype": "TCP", 00:29:35.198 "adrfam": "IPv4", 00:29:35.198 "traddr": "10.0.0.1", 00:29:35.198 "trsvcid": "34016" 00:29:35.198 }, 00:29:35.198 "auth": { 00:29:35.198 "state": "completed", 00:29:35.198 "digest": "sha512", 00:29:35.198 "dhgroup": "ffdhe8192" 00:29:35.198 } 00:29:35.198 } 00:29:35.198 ]' 00:29:35.198 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:35.198 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:35.198 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:35.198 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:35.198 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:35.198 11:38:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:35.198 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:35.198 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:35.456 11:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:29:36.019 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:36.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:36.019 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:36.019 11:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.019 11:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.019 11:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.019 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:36.019 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:36.019 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.276 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:36.276 11:38:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:37.206 00:29:37.207 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:37.207 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:37.207 11:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:37.207 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.207 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:37.207 11:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.207 11:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:37.207 11:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.207 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:37.207 { 00:29:37.207 "cntlid": 139, 00:29:37.207 "qid": 0, 00:29:37.207 "state": "enabled", 00:29:37.207 "listen_address": { 00:29:37.207 "trtype": "TCP", 00:29:37.207 "adrfam": "IPv4", 00:29:37.207 "traddr": "10.0.0.2", 00:29:37.207 "trsvcid": "4420" 00:29:37.207 }, 00:29:37.207 "peer_address": { 00:29:37.207 "trtype": "TCP", 00:29:37.207 "adrfam": "IPv4", 00:29:37.207 "traddr": "10.0.0.1", 00:29:37.207 "trsvcid": "34046" 00:29:37.207 }, 00:29:37.207 "auth": { 00:29:37.207 "state": "completed", 00:29:37.207 "digest": "sha512", 00:29:37.207 "dhgroup": "ffdhe8192" 00:29:37.207 } 00:29:37.207 } 00:29:37.207 ]' 00:29:37.207 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:37.207 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:37.207 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:37.464 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:37.464 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:37.464 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:37.464 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:37.464 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:37.722 11:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:01:NDI2NTczNDFhMzJkNTdhNWVjMzI4ODQzZjhkZjEzYmGhVSKZ: --dhchap-ctrl-secret DHHC-1:02:OTk0MzgzMWMxNTRlYmZmNGU2MTJlOTFiZGM2OWJhZmE0YTUyOWNkYzFiYzgwNmYw61dDjw==: 00:29:38.288 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:38.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:29:38.288 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:38.288 11:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.288 11:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.288 11:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.288 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:38.288 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:38.288 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:38.546 11:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:39.111 00:29:39.111 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:39.111 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:39.111 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:39.369 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:39.369 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:39.369 11:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:39.369 11:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:39.369 11:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:39.369 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:39.369 { 00:29:39.369 "cntlid": 141, 00:29:39.369 "qid": 0, 00:29:39.369 "state": "enabled", 00:29:39.369 "listen_address": { 00:29:39.369 "trtype": "TCP", 00:29:39.369 "adrfam": "IPv4", 00:29:39.369 "traddr": "10.0.0.2", 00:29:39.369 "trsvcid": "4420" 00:29:39.369 }, 00:29:39.369 "peer_address": { 00:29:39.369 "trtype": "TCP", 00:29:39.369 "adrfam": "IPv4", 00:29:39.369 "traddr": "10.0.0.1", 00:29:39.369 "trsvcid": "34078" 00:29:39.369 }, 00:29:39.369 "auth": { 00:29:39.369 "state": "completed", 00:29:39.369 "digest": "sha512", 00:29:39.369 "dhgroup": "ffdhe8192" 00:29:39.369 } 00:29:39.369 } 00:29:39.369 ]' 00:29:39.369 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:39.369 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:39.369 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:39.627 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:39.627 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:39.627 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:39.627 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:39.627 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:39.884 11:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:02:NWY4YzA4YWI2YjU5MDQ1YzQ5YWJhNTZlOWE0NjRkNWJlN2JiZTZkZmYyMTk0ZWIygAfUsw==: --dhchap-ctrl-secret DHHC-1:01:ZmYzNjMyOWJlZmEwYWRiMzcxYWIwZGU5MGI1MjUxMGQxwcEM: 00:29:40.450 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:40.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:40.450 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:40.450 11:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.450 11:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.450 11:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.450 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:29:40.450 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:40.450 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:40.708 11:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:41.276 00:29:41.276 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:41.276 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:41.276 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:41.535 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:41.535 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:41.535 11:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:41.535 11:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:41.535 11:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:41.535 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:41.535 { 00:29:41.535 "cntlid": 143, 00:29:41.535 "qid": 0, 00:29:41.535 "state": "enabled", 00:29:41.535 "listen_address": { 00:29:41.535 "trtype": "TCP", 00:29:41.535 "adrfam": "IPv4", 00:29:41.535 "traddr": "10.0.0.2", 00:29:41.535 "trsvcid": "4420" 00:29:41.535 }, 00:29:41.535 "peer_address": { 00:29:41.535 "trtype": "TCP", 00:29:41.535 "adrfam": "IPv4", 00:29:41.535 "traddr": "10.0.0.1", 00:29:41.535 "trsvcid": "34112" 00:29:41.535 }, 00:29:41.535 "auth": { 00:29:41.535 "state": "completed", 00:29:41.535 "digest": "sha512", 00:29:41.535 "dhgroup": "ffdhe8192" 00:29:41.535 } 00:29:41.535 } 00:29:41.535 ]' 00:29:41.535 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:41.535 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:41.794 11:38:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:41.794 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:41.794 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:41.794 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:41.794 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:41.794 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:42.052 11:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:42.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:42.623 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:42.881 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:29:42.881 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:42.881 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:42.881 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:42.882 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:29:42.882 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:42.882 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:29:42.882 11:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:42.882 11:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:42.882 11:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:42.882 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:42.882 11:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:43.467 00:29:43.467 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:43.467 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:43.467 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:43.727 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:43.727 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:43.727 11:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.727 11:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:43.727 11:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.727 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:43.727 { 00:29:43.727 "cntlid": 145, 00:29:43.727 "qid": 0, 00:29:43.727 "state": "enabled", 00:29:43.727 "listen_address": { 00:29:43.727 "trtype": "TCP", 00:29:43.727 "adrfam": "IPv4", 00:29:43.727 "traddr": "10.0.0.2", 00:29:43.727 "trsvcid": "4420" 00:29:43.727 }, 00:29:43.727 "peer_address": { 00:29:43.727 "trtype": "TCP", 00:29:43.727 "adrfam": "IPv4", 00:29:43.727 "traddr": "10.0.0.1", 00:29:43.727 "trsvcid": "43226" 00:29:43.727 }, 00:29:43.727 "auth": { 00:29:43.727 "state": "completed", 00:29:43.727 "digest": "sha512", 00:29:43.727 "dhgroup": "ffdhe8192" 00:29:43.727 } 00:29:43.727 } 00:29:43.727 ]' 00:29:43.727 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:43.986 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:43.986 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:43.986 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:43.986 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:43.986 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:43.986 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:43.986 11:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:44.245 
11:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRhZjA3M2Y5N2E0MDMxYTc5YmZiYTc2N2I0Y2ZlYmQ0MTYzZjY5OGM1YjA3OTI0KLBodg==: --dhchap-ctrl-secret DHHC-1:03:NGRjZTc3MmExZDRiNzNmMWZlMTY0MTE1NzBlYWQ4MjIxMmFiMmVlN2Y2MjA3MDQ5YWQ3ZDNhN2Y3ZjBhMGRhNNYwHS4=: 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:44.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:44.812 11:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:29:45.380 request: 00:29:45.380 { 00:29:45.380 "name": "nvme0", 00:29:45.380 "trtype": "tcp", 00:29:45.380 "traddr": 
"10.0.0.2", 00:29:45.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:29:45.380 "adrfam": "ipv4", 00:29:45.380 "trsvcid": "4420", 00:29:45.380 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:45.380 "dhchap_key": "key2", 00:29:45.380 "method": "bdev_nvme_attach_controller", 00:29:45.380 "req_id": 1 00:29:45.380 } 00:29:45.380 Got JSON-RPC error response 00:29:45.380 response: 00:29:45.380 { 00:29:45.380 "code": -5, 00:29:45.380 "message": "Input/output error" 00:29:45.380 } 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.380 11:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:45.947 request: 00:29:45.947 { 00:29:45.947 "name": "nvme0", 00:29:45.947 "trtype": "tcp", 00:29:45.947 "traddr": "10.0.0.2", 00:29:45.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:29:45.947 "adrfam": "ipv4", 00:29:45.947 "trsvcid": "4420", 00:29:45.947 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:45.947 "dhchap_key": "key1", 00:29:45.947 "dhchap_ctrlr_key": "ckey2", 00:29:45.947 "method": "bdev_nvme_attach_controller", 00:29:45.947 "req_id": 1 00:29:45.947 } 00:29:45.947 Got JSON-RPC error response 00:29:45.947 response: 00:29:45.947 { 00:29:45.947 "code": -5, 00:29:45.947 "message": "Input/output error" 00:29:45.947 } 00:29:45.947 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:29:45.947 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:45.947 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:45.947 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:45.947 11:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:45.947 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.947 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key1 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.206 11:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:46.774 request: 00:29:46.774 { 00:29:46.774 "name": "nvme0", 00:29:46.774 "trtype": "tcp", 00:29:46.774 "traddr": "10.0.0.2", 00:29:46.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:29:46.774 "adrfam": "ipv4", 00:29:46.774 "trsvcid": "4420", 00:29:46.774 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:46.774 "dhchap_key": "key1", 00:29:46.774 "dhchap_ctrlr_key": "ckey1", 00:29:46.774 "method": "bdev_nvme_attach_controller", 00:29:46.774 "req_id": 1 00:29:46.774 } 00:29:46.774 Got JSON-RPC error response 00:29:46.774 response: 00:29:46.774 { 00:29:46.774 "code": -5, 00:29:46.774 "message": "Input/output error" 00:29:46.774 } 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3979565 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3979565 ']' 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3979565 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3979565 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3979565' 00:29:46.774 killing process with pid 3979565 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3979565 00:29:46.774 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3979565 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:29:47.040 11:38:11 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4005847 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4005847 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 4005847 ']' 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:47.040 11:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4005847 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 4005847 ']' 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
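[Editor's note, not part of the captured log] At this point the script has killed the first nvmf_tgt instance (pid 3979565) and restarts the target for the second half of the test with DHCHAP tracing enabled: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and -L nvmf_auth, and waitforlisten blocks until the new process (pid 4005847) answers on /var/tmp/spdk.sock. A minimal sketch of that restart, reusing the command line shown in the log; the polling loop with rpc_get_methods is an illustrative stand-in for the suite's waitforlisten helper, not the helper itself:

    # start the target inside the test netns, with nvmf_auth-level logging enabled
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # with --wait-for-rpc the RPC server comes up before framework init,
    # so poll the default socket until it starts answering
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 1
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready for RPCs"
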
00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:47.978 11:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:48.237 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:48.804 00:29:48.804 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:29:48.804 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:29:48.804 11:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:49.063 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.063 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:29:49.063 11:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.063 11:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:49.063 11:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.063 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:29:49.063 { 00:29:49.063 
"cntlid": 1, 00:29:49.063 "qid": 0, 00:29:49.063 "state": "enabled", 00:29:49.063 "listen_address": { 00:29:49.063 "trtype": "TCP", 00:29:49.063 "adrfam": "IPv4", 00:29:49.063 "traddr": "10.0.0.2", 00:29:49.063 "trsvcid": "4420" 00:29:49.063 }, 00:29:49.063 "peer_address": { 00:29:49.063 "trtype": "TCP", 00:29:49.063 "adrfam": "IPv4", 00:29:49.063 "traddr": "10.0.0.1", 00:29:49.063 "trsvcid": "43258" 00:29:49.063 }, 00:29:49.063 "auth": { 00:29:49.063 "state": "completed", 00:29:49.063 "digest": "sha512", 00:29:49.063 "dhgroup": "ffdhe8192" 00:29:49.063 } 00:29:49.063 } 00:29:49.063 ]' 00:29:49.063 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:29:49.063 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:29:49.063 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:29:49.322 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:29:49.322 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:29:49.322 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:29:49.322 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:49.322 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:49.581 11:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid 809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-secret DHHC-1:03:N2Q1MzgyY2JlZjFkNWM5ZDI1NTdiMWM2ZDY4M2E0ZDBjODY5ODUwMDVkYzNhNzI5MzE1YmUxMDgxNWI3ZTY0YpA8bjM=: 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:29:50.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --dhchap-key key3 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:29:50.148 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:29:50.406 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:50.406 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:29:50.406 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:50.406 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:29:50.406 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:50.407 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:29:50.407 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:50.407 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:50.407 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:50.665 request: 00:29:50.665 { 00:29:50.665 "name": "nvme0", 00:29:50.665 "trtype": "tcp", 00:29:50.665 "traddr": "10.0.0.2", 00:29:50.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:29:50.665 "adrfam": "ipv4", 00:29:50.665 "trsvcid": "4420", 00:29:50.665 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:50.665 "dhchap_key": "key3", 00:29:50.665 "method": "bdev_nvme_attach_controller", 00:29:50.665 "req_id": 1 00:29:50.665 } 00:29:50.665 Got JSON-RPC error response 00:29:50.665 response: 00:29:50.665 { 00:29:50.665 "code": -5, 00:29:50.665 "message": "Input/output error" 00:29:50.665 } 00:29:50.665 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:29:50.665 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:50.665 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:50.665 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:50.665 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:29:50.665 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:29:50.665 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:29:50.665 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:29:50.926 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:50.926 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:29:50.926 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:50.926 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:29:50.926 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:50.926 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:29:50.926 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:50.926 11:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:50.926 11:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:29:51.254 request: 00:29:51.254 { 00:29:51.254 "name": "nvme0", 00:29:51.254 "trtype": "tcp", 00:29:51.254 "traddr": "10.0.0.2", 00:29:51.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:29:51.254 "adrfam": "ipv4", 00:29:51.254 "trsvcid": "4420", 00:29:51.254 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:51.254 "dhchap_key": "key3", 00:29:51.254 "method": "bdev_nvme_attach_controller", 00:29:51.254 "req_id": 1 00:29:51.254 } 00:29:51.254 Got JSON-RPC error response 00:29:51.254 response: 00:29:51.254 { 00:29:51.254 "code": -5, 00:29:51.254 "message": "Input/output error" 00:29:51.254 } 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:51.254 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:29:51.514 request: 00:29:51.514 { 00:29:51.514 "name": "nvme0", 00:29:51.514 "trtype": "tcp", 00:29:51.514 "traddr": "10.0.0.2", 00:29:51.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562", 00:29:51.514 "adrfam": "ipv4", 00:29:51.514 "trsvcid": "4420", 00:29:51.514 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:29:51.514 "dhchap_key": "key0", 00:29:51.514 "dhchap_ctrlr_key": "key1", 00:29:51.514 "method": "bdev_nvme_attach_controller", 00:29:51.514 "req_id": 1 00:29:51.514 } 00:29:51.514 Got JSON-RPC error response 00:29:51.514 response: 00:29:51.514 { 00:29:51.514 "code": -5, 00:29:51.514 "message": "Input/output error" 00:29:51.514 } 00:29:51.514 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:29:51.514 11:38:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:51.514 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:51.514 11:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:51.514 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:51.514 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:29:51.772 00:29:51.772 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:29:51.772 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:29:51.772 11:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:29:52.031 11:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.031 11:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:29:52.031 11:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:29:52.289 11:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:29:52.289 11:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:29:52.289 11:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3979840 00:29:52.289 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 3979840 ']' 00:29:52.289 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 3979840 00:29:52.289 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:29:52.289 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:52.290 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3979840 00:29:52.290 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:29:52.290 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:29:52.290 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3979840' 00:29:52.290 killing process with pid 3979840 00:29:52.290 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 3979840 00:29:52.290 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 3979840 00:29:52.548 11:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:29:52.548 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.548 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:29:52.548 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:52.548 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
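[Editor's note, not part of the captured log] The NOT-wrapped bdev_nvme_attach_controller calls above are the negative-path checks: each one offers a key, controller key, digest or DH group that does not match what the target currently accepts for this host, and the expected outcome is the JSON-RPC "Input/output error" (code -5) rather than a working controller. The suite's NOT/valid_exec_arg helpers turn that expected failure into a pass; a plain-shell equivalent of one such check, reusing the addresses and NQNs from the log (the if/else is a simplification of the helpers):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # offer a key pairing the target was not provisioned to accept for this host
    if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 \
            -q nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 \
            -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1; then
        echo "FAIL: attach succeeded with a mismatched DHCHAP key pairing" >&2
        exit 1
    else
        echo "OK: attach rejected as expected (Input/output error)"
    fi
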
00:29:52.548 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:52.548 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:52.808 rmmod nvme_tcp 00:29:52.808 rmmod nvme_fabrics 00:29:52.808 rmmod nvme_keyring 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4005847 ']' 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4005847 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 4005847 ']' 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 4005847 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4005847 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4005847' 00:29:52.808 killing process with pid 4005847 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 4005847 00:29:52.808 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 4005847 00:29:53.066 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:53.066 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:53.066 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:53.067 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:53.067 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:53.067 11:38:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.067 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:53.067 11:38:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.970 11:38:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:54.970 11:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yzU /tmp/spdk.key-sha256.LpM /tmp/spdk.key-sha384.wPc /tmp/spdk.key-sha512.uyW /tmp/spdk.key-sha512.D0U /tmp/spdk.key-sha384.IMB /tmp/spdk.key-sha256.q0f '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:29:54.970 00:29:54.970 real 2m44.133s 00:29:54.970 user 6m6.687s 00:29:54.970 sys 0m34.642s 00:29:54.970 11:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:54.970 11:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:29:54.970 ************************************ 00:29:54.970 END TEST 
nvmf_auth_target 00:29:54.970 ************************************ 00:29:55.229 11:38:20 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:29:55.229 11:38:20 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:29:55.229 11:38:20 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:29:55.229 11:38:20 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:55.229 11:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.229 ************************************ 00:29:55.229 START TEST nvmf_bdevio_no_huge 00:29:55.229 ************************************ 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:29:55.229 * Looking for test storage... 00:29:55.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
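Before the bdevio test proper starts, bdevio.sh sources test/nvmf/common.sh, which (as traced above) fixes the host identity once: a host NQN from nvme gen-hostnqn plus the matching UUID-based host ID. One way to reproduce that convention in isolation, assuming nvme-cli is installed; variable names mirror the trace, and the derivation shown is an approximation of what common.sh does:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the UUID portion of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Initiator-side connects can then reuse the same identity, e.g.:
# nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn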
00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:55.229 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.230 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.230 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.230 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:55.230 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:55.230 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:29:55.230 11:38:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:05.205 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:05.205 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.205 11:38:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:05.205 Found net devices under 0000:af:00.0: cvl_0_0 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:05.205 Found net devices under 0000:af:00.1: cvl_0_1 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.205 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.206 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.206 
11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.206 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:05.206 11:38:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:05.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:30:05.206 00:30:05.206 --- 10.0.0.2 ping statistics --- 00:30:05.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.206 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:30:05.206 00:30:05.206 --- 10.0.0.1 ping statistics --- 00:30:05.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.206 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4011257 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4011257 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 4011257 ']' 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
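nvmf_tcp_init above splits the two ice ports into a point-to-point test link: cvl_0_0 moves into a fresh namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings verify the path before nvmfappstart launches the target inside that namespace. A condensed standalone sketch of the same plumbing and launch, run from the spdk repository root with the interface names taken from the trace; the readiness poll at the end is only an approximation of waitforlisten:

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Start the target in the namespace: 1 GiB of regular memory (--no-huge -s 1024), cores 3-6 (-m 0x78).
ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
# The RPC endpoint is a unix socket (/var/tmp/spdk.sock), so it is reachable from the root namespace.
until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done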
00:30:05.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 11:38:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:30:05.206 [2024-06-10 11:38:29.147095] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:30:05.206 [2024-06-10 11:38:29.147161] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:30:05.206 [2024-06-10 11:38:29.282617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.206 [2024-06-10 11:38:29.415527] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.206 [2024-06-10 11:38:29.415584] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.206 [2024-06-10 11:38:29.415603] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.206 [2024-06-10 11:38:29.415620] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.206 [2024-06-10 11:38:29.415633] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.206 [2024-06-10 11:38:29.415761] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.206 [2024-06-10 11:38:29.415912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:30:05.206 [2024-06-10 11:38:29.416040] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:30:05.206 [2024-06-10 11:38:29.416042] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 [2024-06-10 11:38:30.095618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:05.206 11:38:30 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 Malloc0 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 [2024-06-10 11:38:30.143966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.206 { 00:30:05.206 "params": { 00:30:05.206 "name": "Nvme$subsystem", 00:30:05.206 "trtype": "$TEST_TRANSPORT", 00:30:05.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.206 "adrfam": "ipv4", 00:30:05.206 "trsvcid": "$NVMF_PORT", 00:30:05.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.206 "hdgst": ${hdgst:-false}, 00:30:05.206 "ddgst": ${ddgst:-false} 00:30:05.206 }, 00:30:05.206 "method": "bdev_nvme_attach_controller" 00:30:05.206 } 00:30:05.206 EOF 00:30:05.206 )") 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
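The target-side bring-up above boils down to five RPCs: create the TCP transport, back a namespace with a 64 MiB Malloc bdev, and expose it on 10.0.0.2:4420; rpc_cmd in the trace is a thin wrapper around rpc.py talking to the default /var/tmp/spdk.sock. The same sequence as a standalone script, with the options copied verbatim from the trace:

RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                  # transport options as used by bdevio.sh
$RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420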
00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:30:05.206 11:38:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:05.206 "params": { 00:30:05.206 "name": "Nvme1", 00:30:05.206 "trtype": "tcp", 00:30:05.206 "traddr": "10.0.0.2", 00:30:05.206 "adrfam": "ipv4", 00:30:05.206 "trsvcid": "4420", 00:30:05.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.206 "hdgst": false, 00:30:05.206 "ddgst": false 00:30:05.206 }, 00:30:05.206 "method": "bdev_nvme_attach_controller" 00:30:05.206 }' 00:30:05.206 [2024-06-10 11:38:30.199778] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:30:05.206 [2024-06-10 11:38:30.199843] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4011441 ] 00:30:05.464 [2024-06-10 11:38:30.326227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:05.464 [2024-06-10 11:38:30.458624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.464 [2024-06-10 11:38:30.458718] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.464 [2024-06-10 11:38:30.458723] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.721 I/O targets: 00:30:05.721 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:05.721 00:30:05.721 00:30:05.721 CUnit - A unit testing framework for C - Version 2.1-3 00:30:05.721 http://cunit.sourceforge.net/ 00:30:05.721 00:30:05.721 00:30:05.721 Suite: bdevio tests on: Nvme1n1 00:30:05.978 Test: blockdev write read block ...passed 00:30:05.978 Test: blockdev write zeroes read block ...passed 00:30:05.978 Test: blockdev write zeroes read no split ...passed 00:30:05.978 Test: blockdev write zeroes read split ...passed 00:30:05.978 Test: blockdev write zeroes read split partial ...passed 00:30:05.978 Test: blockdev reset ...[2024-06-10 11:38:31.018038] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:05.978 [2024-06-10 11:38:31.018113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13095a0 (9): Bad file descriptor 00:30:05.978 [2024-06-10 11:38:31.072654] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:05.978 passed 00:30:06.235 Test: blockdev write read 8 blocks ...passed 00:30:06.235 Test: blockdev write read size > 128k ...passed 00:30:06.235 Test: blockdev write read invalid size ...passed 00:30:06.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:06.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:06.235 Test: blockdev write read max offset ...passed 00:30:06.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:06.235 Test: blockdev writev readv 8 blocks ...passed 00:30:06.235 Test: blockdev writev readv 30 x 1block ...passed 00:30:06.235 Test: blockdev writev readv block ...passed 00:30:06.235 Test: blockdev writev readv size > 128k ...passed 00:30:06.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:06.235 Test: blockdev comparev and writev ...[2024-06-10 11:38:31.293196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:06.235 [2024-06-10 11:38:31.293225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.235 [2024-06-10 11:38:31.293241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:06.235 [2024-06-10 11:38:31.293251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:06.235 [2024-06-10 11:38:31.293604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:06.235 [2024-06-10 11:38:31.293620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:06.235 [2024-06-10 11:38:31.293638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:06.235 [2024-06-10 11:38:31.293648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:06.235 [2024-06-10 11:38:31.293988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:06.235 [2024-06-10 11:38:31.294003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:06.235 [2024-06-10 11:38:31.294020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:06.235 [2024-06-10 11:38:31.294033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:06.235 [2024-06-10 11:38:31.294385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:06.235 [2024-06-10 11:38:31.294399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:06.235 [2024-06-10 11:38:31.294415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:06.235 [2024-06-10 11:38:31.294427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:06.235 passed 00:30:06.493 Test: blockdev nvme passthru rw ...passed 00:30:06.493 Test: blockdev nvme passthru vendor specific ...[2024-06-10 11:38:31.378104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.493 [2024-06-10 11:38:31.378125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:06.493 [2024-06-10 11:38:31.378330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.493 [2024-06-10 11:38:31.378342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:06.493 [2024-06-10 11:38:31.378539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.493 [2024-06-10 11:38:31.378551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:06.493 [2024-06-10 11:38:31.378756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:06.493 [2024-06-10 11:38:31.378768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:06.493 passed 00:30:06.493 Test: blockdev nvme admin passthru ...passed 00:30:06.493 Test: blockdev copy ...passed 00:30:06.493 00:30:06.493 Run Summary: Type Total Ran Passed Failed Inactive 00:30:06.493 suites 1 1 n/a 0 0 00:30:06.493 tests 23 23 23 0 0 00:30:06.493 asserts 152 152 152 0 n/a 00:30:06.493 00:30:06.493 Elapsed time = 1.280 seconds 00:30:06.749 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.749 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:06.749 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:06.749 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:06.749 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:06.749 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:30:06.749 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:06.749 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:07.007 rmmod nvme_tcp 00:30:07.007 rmmod nvme_fabrics 00:30:07.007 rmmod nvme_keyring 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4011257 ']' 00:30:07.007 11:38:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4011257 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 4011257 ']' 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 4011257 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4011257 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4011257' 00:30:07.007 killing process with pid 4011257 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 4011257 00:30:07.007 11:38:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 4011257 00:30:07.572 11:38:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:07.572 11:38:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:07.572 11:38:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:07.572 11:38:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:07.572 11:38:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:07.572 11:38:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.572 11:38:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:07.573 11:38:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.477 11:38:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:09.477 00:30:09.477 real 0m14.380s 00:30:09.477 user 0m16.746s 00:30:09.477 sys 0m8.278s 00:30:09.477 11:38:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:09.477 11:38:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:30:09.477 ************************************ 00:30:09.477 END TEST nvmf_bdevio_no_huge 00:30:09.477 ************************************ 00:30:09.477 11:38:34 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:30:09.477 11:38:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:09.477 11:38:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:09.477 11:38:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:09.736 ************************************ 00:30:09.736 START TEST nvmf_tls 00:30:09.736 ************************************ 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:30:09.736 * Looking for test storage... 
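The 23-test run summary above came from feeding the expanded gen_nvmf_target_json output to the bdevio app over /dev/fd/62. A standalone reconstruction of that invocation follows; the outer "subsystems"/"bdev"/"config" wrapper is the standard SPDK --json config layout and is an assumption about what gen_nvmf_target_json emits around the fragment printed in the trace:

cat > /tmp/bdevio_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Run the bdev I/O tests against that controller, again without hugepages.
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024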
00:30:09.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.736 11:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:09.737 11:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.737 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:09.737 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:09.737 11:38:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:30:09.737 11:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:30:17.857 
11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.857 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:17.858 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:17.858 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:17.858 Found net devices under 0000:af:00.0: cvl_0_0 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:17.858 Found net devices under 0000:af:00.1: cvl_0_1 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:17.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:30:17.858 00:30:17.858 --- 10.0.0.2 ping statistics --- 00:30:17.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.858 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:30:17.858 00:30:17.858 --- 10.0.0.1 ping statistics --- 00:30:17.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.858 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:17.858 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4016144 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4016144 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4016144 ']' 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.117 11:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:18.118 11:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:18.118 [2024-06-10 11:38:43.048080] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:30:18.118 [2024-06-10 11:38:43.048142] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.118 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.118 [2024-06-10 11:38:43.168518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.377 [2024-06-10 11:38:43.252886] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.377 [2024-06-10 11:38:43.252927] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
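Everything up to this point is the fixture for the TCP tests: the two E810 ports (0000:af:00.0 and 0000:af:00.1, device 0x159b, driver ice) show up as cvl_0_0 and cvl_0_1, the target port is moved into its own network namespace with address 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is ping-checked in both directions before nvmf_tgt is launched inside the namespace. A minimal stand-alone sketch of the same setup follows; interface names and addresses are the ones from the log, the rest is a simplification of nvmf/common.sh rather than the exact code.

    #!/usr/bin/env bash
    # Minimal approximation of the nvmf_tcp_init fixture traced above.
    # Assumes both ports are bound to the kernel driver; the netdev for a PCI
    # function can be looked up via e.g. ls /sys/bus/pci/devices/0000:af:00.0/net
    set -e
    TGT_IF=cvl_0_0              # target-side port, will live inside the namespace
    INI_IF=cvl_0_1              # initiator-side port, stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the default NVMe/TCP port and verify reachability both ways.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

nvmf_tgt and the spdk_nvme_perf run later in the log execute inside that namespace via "ip netns exec cvl_0_0_ns_spdk ...", while bdevperf connects from the root namespace over cvl_0_1.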
00:30:18.377 [2024-06-10 11:38:43.252941] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.377 [2024-06-10 11:38:43.252953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.377 [2024-06-10 11:38:43.252963] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.377 [2024-06-10 11:38:43.252988] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.945 11:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:18.945 11:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:30:18.945 11:38:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:18.945 11:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:18.945 11:38:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:18.945 11:38:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.945 11:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:30:18.945 11:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:30:19.204 true 00:30:19.204 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:19.204 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:30:19.463 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:30:19.463 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:30:19.463 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:30:19.722 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:19.722 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:30:19.982 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:30:19.982 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:30:19.982 11:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:30:20.241 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:20.241 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:30:20.500 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:30:20.500 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:30:20.500 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:20.500 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:30:20.760 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:30:20.760 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:30:20.760 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:30:20.760 11:38:45 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:20.760 11:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:30:21.019 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:30:21.019 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:30:21.019 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:30:21.278 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:30:21.278 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:30:21.537 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:30:21.537 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:30:21.537 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:30:21.537 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.qMFXYtCbeP 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.0c15iJPKou 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.qMFXYtCbeP 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0c15iJPKou 00:30:21.538 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:30:21.797 11:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:30:22.057 11:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.qMFXYtCbeP 00:30:22.057 11:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qMFXYtCbeP 00:30:22.057 11:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:22.316 [2024-06-10 11:38:47.360708] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.316 11:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:30:22.576 11:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:30:22.835 [2024-06-10 11:38:47.809893] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:22.835 [2024-06-10 11:38:47.810123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.835 11:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:30:23.094 malloc0 00:30:23.094 11:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:23.353 11:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qMFXYtCbeP 00:30:23.612 [2024-06-10 11:38:48.493085] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:23.612 11:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qMFXYtCbeP 00:30:23.612 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.637 Initializing NVMe Controllers 00:30:33.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:33.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:33.637 Initialization complete. Launching workers. 
00:30:33.637 ======================================================== 00:30:33.637 Latency(us) 00:30:33.637 Device Information : IOPS MiB/s Average min max 00:30:33.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11665.86 45.57 5487.08 1085.35 6426.58 00:30:33.637 ======================================================== 00:30:33.637 Total : 11665.86 45.57 5487.08 1085.35 6426.58 00:30:33.637 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qMFXYtCbeP 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qMFXYtCbeP' 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4018846 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4018846 /var/tmp/bdevperf.sock 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4018846 ']' 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:33.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:33.637 11:38:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:33.637 [2024-06-10 11:38:58.688441] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
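The numbers just above are the first positive result: spdk_nvme_perf pushed roughly 11.7k IOPS of 4 KiB random read/write I/O through the TLS-protected connection (-S ssl with --psk-path). The target it talked to was configured entirely over rpc.py; after the option round-trips (TLS version 13, then 7, then 13 again, kTLS enabled and disabled again) the ssl socket implementation ends up at TLS 1.3 with kTLS off, and the rest of the bring-up condenses to the sketch below. Paths, NQNs and the key file are the ones from the log; the netns prefix and error handling are omitted, so treat this as a summary rather than the exact tls.sh logic.

    #!/usr/bin/env bash
    # Condensed target-side TLS bring-up, as exercised above against an
    # nvmf_tgt started with --wait-for-rpc.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    KEY=/tmp/tmp.qMFXYtCbeP        # PSK in interchange format, mode 0600

    $RPC sock_set_default_impl -i ssl
    $RPC sock_impl_set_options -i ssl --tls-version 13     # TLS 1.3 only
    $RPC framework_start_init

    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10

    # -k enables TLS on this listener; the host entry carries the PSK path.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

The bdevperf run starting here (pid 4018846) reuses the same listener and host entry; only the initiator side changes, as sketched near the end of this section.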
00:30:33.637 [2024-06-10 11:38:58.688507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018846 ] 00:30:33.637 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.897 [2024-06-10 11:38:58.782985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.897 [2024-06-10 11:38:58.854525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.833 11:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:34.833 11:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:30:34.833 11:38:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qMFXYtCbeP 00:30:34.833 [2024-06-10 11:38:59.817757] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:34.833 [2024-06-10 11:38:59.817834] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:34.833 TLSTESTn1 00:30:34.833 11:38:59 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:30:35.092 Running I/O for 10 seconds... 00:30:45.065 00:30:45.065 Latency(us) 00:30:45.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:45.065 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:45.065 Verification LBA range: start 0x0 length 0x2000 00:30:45.065 TLSTESTn1 : 10.04 3796.96 14.83 0.00 0.00 33643.31 6710.89 55364.81 00:30:45.065 =================================================================================================================== 00:30:45.065 Total : 3796.96 14.83 0.00 0.00 33643.31 6710.89 55364.81 00:30:45.065 0 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4018846 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4018846 ']' 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4018846 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4018846 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4018846' 00:30:45.065 killing process with pid 4018846 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4018846 00:30:45.065 Received shutdown signal, test time was about 10.000000 seconds 00:30:45.065 00:30:45.065 Latency(us) 00:30:45.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:30:45.065 =================================================================================================================== 00:30:45.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:45.065 [2024-06-10 11:39:10.157410] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:45.065 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4018846 00:30:45.323 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0c15iJPKou 00:30:45.323 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:30:45.323 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0c15iJPKou 00:30:45.323 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:30:45.323 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:45.323 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:30:45.323 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:45.323 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0c15iJPKou 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0c15iJPKou' 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4021271 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4021271 /var/tmp/bdevperf.sock 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4021271 ']' 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:45.324 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:45.324 [2024-06-10 11:39:10.393189] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:30:45.324 [2024-06-10 11:39:10.393254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021271 ] 00:30:45.582 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.582 [2024-06-10 11:39:10.487357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.582 [2024-06-10 11:39:10.552026] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.582 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:45.582 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:30:45.582 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0c15iJPKou 00:30:45.840 [2024-06-10 11:39:10.856140] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:45.840 [2024-06-10 11:39:10.856222] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:45.840 [2024-06-10 11:39:10.861122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:45.840 [2024-06-10 11:39:10.861782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3420 (107): Transport endpoint is not connected 00:30:45.840 [2024-06-10 11:39:10.862773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3420 (9): Bad file descriptor 00:30:45.840 [2024-06-10 11:39:10.863774] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.840 [2024-06-10 11:39:10.863786] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:30:45.840 [2024-06-10 11:39:10.863803] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:45.840 request: 00:30:45.840 { 00:30:45.840 "name": "TLSTEST", 00:30:45.840 "trtype": "tcp", 00:30:45.840 "traddr": "10.0.0.2", 00:30:45.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:45.840 "adrfam": "ipv4", 00:30:45.840 "trsvcid": "4420", 00:30:45.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.840 "psk": "/tmp/tmp.0c15iJPKou", 00:30:45.840 "method": "bdev_nvme_attach_controller", 00:30:45.840 "req_id": 1 00:30:45.840 } 00:30:45.840 Got JSON-RPC error response 00:30:45.840 response: 00:30:45.840 { 00:30:45.840 "code": -5, 00:30:45.840 "message": "Input/output error" 00:30:45.840 } 00:30:45.840 11:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4021271 00:30:45.840 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4021271 ']' 00:30:45.840 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4021271 00:30:45.840 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:30:45.840 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:45.840 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4021271 00:30:46.098 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:30:46.098 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:30:46.098 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4021271' 00:30:46.098 killing process with pid 4021271 00:30:46.098 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4021271 00:30:46.098 Received shutdown signal, test time was about 10.000000 seconds 00:30:46.098 00:30:46.098 Latency(us) 00:30:46.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.098 =================================================================================================================== 00:30:46.098 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:46.098 [2024-06-10 11:39:10.955489] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:46.098 11:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4021271 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qMFXYtCbeP 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qMFXYtCbeP 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qMFXYtCbeP 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qMFXYtCbeP' 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4021469 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4021469 /var/tmp/bdevperf.sock 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4021469 ']' 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:46.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:46.098 11:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:46.098 [2024-06-10 11:39:11.178442] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:30:46.098 [2024-06-10 11:39:11.178510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021469 ] 00:30:46.356 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.356 [2024-06-10 11:39:11.272198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.356 [2024-06-10 11:39:11.343305] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:47.287 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:47.287 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:30:47.287 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qMFXYtCbeP 00:30:47.287 [2024-06-10 11:39:12.310651] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:47.287 [2024-06-10 11:39:12.310728] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:47.287 [2024-06-10 11:39:12.322088] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:30:47.287 [2024-06-10 11:39:12.322118] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:30:47.287 [2024-06-10 11:39:12.322152] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:47.287 [2024-06-10 11:39:12.322979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xace420 (107): Transport endpoint is not connected 00:30:47.287 [2024-06-10 11:39:12.323971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xace420 (9): Bad file descriptor 00:30:47.287 [2024-06-10 11:39:12.324973] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.287 [2024-06-10 11:39:12.324983] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:30:47.287 [2024-06-10 11:39:12.324995] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
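This failure is the intended outcome of the wrong-host check: the controller announces itself as host2, but only host1 was registered with a PSK, so the target cannot resolve a key for the TLS PSK identity it receives and tears the connection down during the handshake (hence errno 107 on the initiator side). Going by the error text, the identity is assembled from a fixed NVMe prefix plus the host and subsystem NQNs; purely as an illustration of its shape:

    # Shape of the PSK identity reported in the "Could not find PSK for identity"
    # errors above (illustrative only; the real string is built inside SPDK's
    # tcp/posix socket code).
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    identity="NVMe0R01 ${hostnqn} ${subnqn}"
    echo "$identity"
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1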
00:30:47.287 request: 00:30:47.287 { 00:30:47.287 "name": "TLSTEST", 00:30:47.287 "trtype": "tcp", 00:30:47.287 "traddr": "10.0.0.2", 00:30:47.287 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:47.287 "adrfam": "ipv4", 00:30:47.287 "trsvcid": "4420", 00:30:47.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:47.287 "psk": "/tmp/tmp.qMFXYtCbeP", 00:30:47.287 "method": "bdev_nvme_attach_controller", 00:30:47.287 "req_id": 1 00:30:47.287 } 00:30:47.287 Got JSON-RPC error response 00:30:47.287 response: 00:30:47.287 { 00:30:47.287 "code": -5, 00:30:47.287 "message": "Input/output error" 00:30:47.287 } 00:30:47.287 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4021469 00:30:47.287 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4021469 ']' 00:30:47.287 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4021469 00:30:47.287 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:30:47.287 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:47.287 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4021469 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4021469' 00:30:47.545 killing process with pid 4021469 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4021469 00:30:47.545 Received shutdown signal, test time was about 10.000000 seconds 00:30:47.545 00:30:47.545 Latency(us) 00:30:47.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.545 =================================================================================================================== 00:30:47.545 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:47.545 [2024-06-10 11:39:12.402926] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4021469 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qMFXYtCbeP 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qMFXYtCbeP 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qMFXYtCbeP 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qMFXYtCbeP' 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4021681 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4021681 /var/tmp/bdevperf.sock 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4021681 ']' 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:47.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:47.545 11:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:47.545 [2024-06-10 11:39:12.625563] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:30:47.545 [2024-06-10 11:39:12.625637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021681 ] 00:30:47.804 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.804 [2024-06-10 11:39:12.720187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.804 [2024-06-10 11:39:12.791017] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qMFXYtCbeP 00:30:48.739 [2024-06-10 11:39:13.737009] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:48.739 [2024-06-10 11:39:13.737101] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:48.739 [2024-06-10 11:39:13.745753] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:30:48.739 [2024-06-10 11:39:13.745784] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:30:48.739 [2024-06-10 11:39:13.745819] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:48.739 [2024-06-10 11:39:13.746457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221d420 (107): Transport endpoint is not connected 00:30:48.739 [2024-06-10 11:39:13.747450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x221d420 (9): Bad file descriptor 00:30:48.739 [2024-06-10 11:39:13.748451] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:48.739 [2024-06-10 11:39:13.748462] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:30:48.739 [2024-06-10 11:39:13.748473] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:30:48.739 request: 00:30:48.739 { 00:30:48.739 "name": "TLSTEST", 00:30:48.739 "trtype": "tcp", 00:30:48.739 "traddr": "10.0.0.2", 00:30:48.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:48.739 "adrfam": "ipv4", 00:30:48.739 "trsvcid": "4420", 00:30:48.739 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:48.739 "psk": "/tmp/tmp.qMFXYtCbeP", 00:30:48.739 "method": "bdev_nvme_attach_controller", 00:30:48.739 "req_id": 1 00:30:48.739 } 00:30:48.739 Got JSON-RPC error response 00:30:48.739 response: 00:30:48.739 { 00:30:48.739 "code": -5, 00:30:48.739 "message": "Input/output error" 00:30:48.739 } 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4021681 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4021681 ']' 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4021681 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4021681 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4021681' 00:30:48.739 killing process with pid 4021681 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4021681 00:30:48.739 Received shutdown signal, test time was about 10.000000 seconds 00:30:48.739 00:30:48.739 Latency(us) 00:30:48.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.739 =================================================================================================================== 00:30:48.739 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:48.739 [2024-06-10 11:39:13.821705] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:48.739 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4021681 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
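The wrapper being expanded in this part of the trace is autotest_common.sh's NOT helper: it runs the wrapped command and inverts its exit status, so a bdev_nvme_attach_controller that correctly refuses the bad configuration makes the test step pass. The real helper also validates its first argument (the valid_exec_arg / case lines above); a minimal stand-in with the same net effect:

    # Minimal stand-in for the NOT helper whose expansion appears in this trace:
    # the step succeeds only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1        # the command unexpectedly succeeded
        else
            return 0        # the command failed, as the test requires
        fi
    }

    # Invoked here as, for example (run_bdevperf is the suite's own function):
    #   NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''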
00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4021859 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4021859 /var/tmp/bdevperf.sock 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4021859 ']' 00:30:48.998 11:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:48.998 11:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:48.998 11:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:48.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:48.998 11:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:48.998 11:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:48.998 [2024-06-10 11:39:14.043844] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:30:48.998 [2024-06-10 11:39:14.043896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021859 ] 00:30:48.998 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.256 [2024-06-10 11:39:14.123763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.256 [2024-06-10 11:39:14.190018] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.822 11:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:49.822 11:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:30:49.822 11:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:30:50.081 [2024-06-10 11:39:15.059379] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:50.081 [2024-06-10 11:39:15.061264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1fad0 (9): Bad file descriptor 00:30:50.081 [2024-06-10 11:39:15.062261] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:50.081 [2024-06-10 11:39:15.062274] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:30:50.081 [2024-06-10 11:39:15.062285] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:50.081 request: 00:30:50.081 { 00:30:50.082 "name": "TLSTEST", 00:30:50.082 "trtype": "tcp", 00:30:50.082 "traddr": "10.0.0.2", 00:30:50.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.082 "adrfam": "ipv4", 00:30:50.082 "trsvcid": "4420", 00:30:50.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.082 "method": "bdev_nvme_attach_controller", 00:30:50.082 "req_id": 1 00:30:50.082 } 00:30:50.082 Got JSON-RPC error response 00:30:50.082 response: 00:30:50.082 { 00:30:50.082 "code": -5, 00:30:50.082 "message": "Input/output error" 00:30:50.082 } 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4021859 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4021859 ']' 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4021859 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4021859 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4021859' 00:30:50.082 killing process with pid 4021859 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4021859 00:30:50.082 Received shutdown signal, test time was about 10.000000 seconds 00:30:50.082 00:30:50.082 Latency(us) 00:30:50.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.082 =================================================================================================================== 00:30:50.082 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:50.082 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4021859 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4016144 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4016144 ']' 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4016144 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4016144 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4016144' 00:30:50.341 killing process with pid 4016144 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4016144 
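With that, the first target instance (pid 4016144) is torn down. It served one successful TLS run plus four deliberate failures: a key that was generated but never registered on the target (/tmp/tmp.0c15iJPKou), a host NQN with no key (host2), a subsystem NQN the target does not serve (cnode2), and no PSK at all against a TLS-only listener. Each attempt ends identically, with the connection dropped during the handshake (errno 107) and bdev_nvme_attach_controller returning the -5 Input/output error shown in the JSON-RPC dumps. One such check, reproduced by hand, would look roughly like the sketch below; it assumes a bdevperf instance is already listening on /var/tmp/bdevperf.sock as in the log.

    #!/usr/bin/env bash
    # Attach with a key the target never learned and require the RPC to fail.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BAD_KEY=/tmp/tmp.0c15iJPKou   # generated earlier, never passed to nvmf_subsystem_add_host

    if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
          --psk "$BAD_KEY"; then
        echo "unexpected: attach succeeded with an unregistered PSK" >&2
        exit 1
    fi
    echo "attach failed as expected"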
00:30:50.341 [2024-06-10 11:39:15.359598] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:50.341 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4016144 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.sSztWSNBtx 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.sSztWSNBtx 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4022165 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4022165 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4022165 ']' 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:50.599 11:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:50.599 [2024-06-10 11:39:15.692886] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
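The key strings used throughout this section come from format_interchange_psk: a "NVMeTLSkey-1" prefix, a two-digit field derived from the digest argument (1 for the earlier key, 2 for the longer key generated here), and the Base64 of the key material with a CRC32 appended, produced by the inline "python -" call visible in the trace. The sketch below is a close approximation; the CRC byte order and the meaning of the digest field are inferred from the observed keys rather than taken from nvmf/common.sh, so details may differ.

    #!/usr/bin/env bash
    # Approximate re-implementation of format_interchange_psk as used in this log.
    # Usage: format_psk <key material> <digest>
    format_psk() {
        local key=$1 digest=$2
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest"
    }

    format_psk 00112233445566778899aabbccddeeff 1
    # key observed in the log for the same input:
    #   NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The resulting /tmp/tmp.sSztWSNBtx file is chmod'ed to 0600 and handed to both nvmf_subsystem_add_host on the target side and bdev_nvme_attach_controller --psk on the initiator side in the second round of tests below.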
00:30:50.599 [2024-06-10 11:39:15.692937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.858 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.858 [2024-06-10 11:39:15.794443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.858 [2024-06-10 11:39:15.873361] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.858 [2024-06-10 11:39:15.873407] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.858 [2024-06-10 11:39:15.873420] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.858 [2024-06-10 11:39:15.873433] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.858 [2024-06-10 11:39:15.873447] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.858 [2024-06-10 11:39:15.873474] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.sSztWSNBtx 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sSztWSNBtx 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:51.792 [2024-06-10 11:39:16.853462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.792 11:39:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:30:52.050 11:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:30:52.308 [2024-06-10 11:39:17.318703] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:52.308 [2024-06-10 11:39:17.318951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.308 11:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:30:52.567 malloc0 00:30:52.567 11:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:52.825 11:39:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx 
00:30:53.083 [2024-06-10 11:39:18.046032] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sSztWSNBtx 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sSztWSNBtx' 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4022669 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4022669 /var/tmp/bdevperf.sock 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4022669 ']' 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:53.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:53.083 11:39:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:30:53.084 [2024-06-10 11:39:18.117079] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
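run_bdevperf, entered above, drives the initiator side of the test: bdevperf is started idle (-z) on its own RPC socket, the TLS-enabled controller is attached with the same PSK file, and perform_tests launches the configured verify workload. A condensed sketch of that flow, with workspace paths shortened and flags taken from the trace:

rpc=scripts/rpc.py
sock=/var/tmp/bdevperf.sock
key=/tmp/tmp.sSztWSNBtx

# start bdevperf idle (-z); the test waits for its RPC socket before issuing RPCs
build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

# attach the TLS controller; the bdev shows up as TLSTESTn1 (controller name plus namespace 1)
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"

# kick off the 10 s verify run over RPC
examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests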
00:30:53.084 [2024-06-10 11:39:18.117144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022669 ] 00:30:53.084 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.343 [2024-06-10 11:39:18.211734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.343 [2024-06-10 11:39:18.280327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.277 11:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:54.277 11:39:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:30:54.277 11:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx 00:30:54.277 [2024-06-10 11:39:19.242829] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:54.277 [2024-06-10 11:39:19.242908] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:54.277 TLSTESTn1 00:30:54.277 11:39:19 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:30:54.535 Running I/O for 10 seconds... 00:31:04.505 00:31:04.505 Latency(us) 00:31:04.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.505 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:04.505 Verification LBA range: start 0x0 length 0x2000 00:31:04.505 TLSTESTn1 : 10.04 3700.93 14.46 0.00 0.00 34510.07 4692.38 65431.14 00:31:04.505 =================================================================================================================== 00:31:04.505 Total : 3700.93 14.46 0.00 0.00 34510.07 4692.38 65431.14 00:31:04.505 0 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4022669 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4022669 ']' 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4022669 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4022669 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4022669' 00:31:04.505 killing process with pid 4022669 00:31:04.505 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4022669 00:31:04.505 Received shutdown signal, test time was about 10.000000 seconds 00:31:04.505 00:31:04.505 Latency(us) 00:31:04.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:31:04.505 =================================================================================================================== 00:31:04.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:04.506 [2024-06-10 11:39:29.595877] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:04.506 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4022669 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.sSztWSNBtx 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sSztWSNBtx 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sSztWSNBtx 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sSztWSNBtx 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sSztWSNBtx' 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4024533 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4024533 /var/tmp/bdevperf.sock 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4024533 ']' 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:04.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:04.764 11:39:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:04.764 [2024-06-10 11:39:29.832981] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
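The key file is then loosened to 0666 and the same attach is exercised under the test's NOT wrapper: it is expected to fail, because the PSK file must not be group- or world-accessible. A sketch of that negative check, using the same paths as the trace:

chmod 0666 /tmp/tmp.sSztWSNBtx     # deliberately too-permissive PSK file

# the attach should now be rejected ("Incorrect permissions for PSK file",
# surfaced to the caller as JSON-RPC "Operation not permitted")
if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx; then
    echo "attach unexpectedly succeeded" >&2
    exit 1
fi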
00:31:04.764 [2024-06-10 11:39:29.833048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4024533 ] 00:31:05.022 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.022 [2024-06-10 11:39:29.927293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.022 [2024-06-10 11:39:29.992959] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:05.630 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:05.630 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:05.630 11:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx 00:31:05.926 [2024-06-10 11:39:30.839128] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:05.926 [2024-06-10 11:39:30.839184] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:31:05.926 [2024-06-10 11:39:30.839193] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.sSztWSNBtx 00:31:05.926 request: 00:31:05.926 { 00:31:05.926 "name": "TLSTEST", 00:31:05.926 "trtype": "tcp", 00:31:05.926 "traddr": "10.0.0.2", 00:31:05.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.926 "adrfam": "ipv4", 00:31:05.926 "trsvcid": "4420", 00:31:05.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.926 "psk": "/tmp/tmp.sSztWSNBtx", 00:31:05.926 "method": "bdev_nvme_attach_controller", 00:31:05.926 "req_id": 1 00:31:05.926 } 00:31:05.926 Got JSON-RPC error response 00:31:05.926 response: 00:31:05.926 { 00:31:05.926 "code": -1, 00:31:05.926 "message": "Operation not permitted" 00:31:05.926 } 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4024533 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4024533 ']' 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4024533 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4024533 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4024533' 00:31:05.926 killing process with pid 4024533 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4024533 00:31:05.926 Received shutdown signal, test time was about 10.000000 seconds 00:31:05.926 00:31:05.926 Latency(us) 00:31:05.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.926 =================================================================================================================== 00:31:05.926 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:05.926 11:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 
-- # wait 4024533 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4022165 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4022165 ']' 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4022165 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4022165 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4022165' 00:31:06.185 killing process with pid 4022165 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4022165 00:31:06.185 [2024-06-10 11:39:31.143901] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:06.185 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4022165 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4024817 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4024817 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4024817 ']' 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:06.444 11:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:06.444 [2024-06-10 11:39:31.417530] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
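With the key still at 0666, a fresh target (pid 4024817) is started and the NOT-wrapped setup_nvmf_tgt that follows is expected to fail at nvmf_subsystem_add_host, where tcp_load_psk applies the same permission check and the RPC returns -32603 "Internal error". A sketch of the target-side counterpart of the check:

# adding the host with a world-readable PSK file should be rejected by the target
if scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx; then
    echo "add_host unexpectedly succeeded" >&2
    exit 1
fi
chmod 0600 /tmp/tmp.sSztWSNBtx     # restore the expected permissions before the next run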
00:31:06.444 [2024-06-10 11:39:31.417600] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.444 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.444 [2024-06-10 11:39:31.533642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.702 [2024-06-10 11:39:31.617084] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.702 [2024-06-10 11:39:31.617126] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.702 [2024-06-10 11:39:31.617140] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.702 [2024-06-10 11:39:31.617152] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.702 [2024-06-10 11:39:31.617162] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.702 [2024-06-10 11:39:31.617188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.sSztWSNBtx 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.sSztWSNBtx 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.sSztWSNBtx 00:31:07.267 11:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sSztWSNBtx 00:31:07.268 11:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:07.525 [2024-06-10 11:39:32.576880] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.525 11:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:07.783 11:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:08.041 [2024-06-10 11:39:33.026071] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:31:08.041 [2024-06-10 11:39:33.026305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.041 11:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:08.300 malloc0 00:31:08.300 11:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:08.558 11:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx 00:31:08.817 [2024-06-10 11:39:33.705209] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:31:08.817 [2024-06-10 11:39:33.705244] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:31:08.817 [2024-06-10 11:39:33.705279] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:31:08.817 request: 00:31:08.817 { 00:31:08.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:08.817 "host": "nqn.2016-06.io.spdk:host1", 00:31:08.817 "psk": "/tmp/tmp.sSztWSNBtx", 00:31:08.817 "method": "nvmf_subsystem_add_host", 00:31:08.817 "req_id": 1 00:31:08.817 } 00:31:08.817 Got JSON-RPC error response 00:31:08.817 response: 00:31:08.817 { 00:31:08.817 "code": -32603, 00:31:08.817 "message": "Internal error" 00:31:08.817 } 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4024817 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4024817 ']' 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4024817 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4024817 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4024817' 00:31:08.817 killing process with pid 4024817 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4024817 00:31:08.817 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4024817 00:31:09.076 11:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.sSztWSNBtx 00:31:09.076 11:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:31:09.076 11:39:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:09.076 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:09.076 11:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:09.076 11:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=4025377 00:31:09.076 11:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:09.076 11:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4025377 00:31:09.076 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4025377 ']' 00:31:09.076 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.076 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:09.077 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.077 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:09.077 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:09.077 [2024-06-10 11:39:34.055401] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:31:09.077 [2024-06-10 11:39:34.055466] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.077 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.077 [2024-06-10 11:39:34.156212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.335 [2024-06-10 11:39:34.238524] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.335 [2024-06-10 11:39:34.238570] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.335 [2024-06-10 11:39:34.238589] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.335 [2024-06-10 11:39:34.238602] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.335 [2024-06-10 11:39:34.238612] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
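With the key back at 0600, target 4025377 repeats the standard setup before the save_config phase; the sequence below summarizes setup_nvmf_tgt as traced (paths shortened, flags as they appear in the log):

rpc=scripts/rpc.py
key=/tmp/tmp.sSztWSNBtx

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"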
00:31:09.335 [2024-06-10 11:39:34.238639] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.901 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:09.901 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:09.901 11:39:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:09.901 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:09.901 11:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:10.159 11:39:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.159 11:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.sSztWSNBtx 00:31:10.159 11:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sSztWSNBtx 00:31:10.159 11:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:10.159 [2024-06-10 11:39:35.163129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.159 11:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:10.417 11:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:10.675 [2024-06-10 11:39:35.556164] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:10.675 [2024-06-10 11:39:35.556409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.675 11:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:10.675 malloc0 00:31:10.675 11:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:10.934 11:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx 00:31:11.192 [2024-06-10 11:39:36.054823] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4025670 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4025670 /var/tmp/bdevperf.sock 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4025670 ']' 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:11.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:11.192 11:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:11.192 [2024-06-10 11:39:36.119511] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:31:11.192 [2024-06-10 11:39:36.119560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025670 ] 00:31:11.192 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.192 [2024-06-10 11:39:36.197868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.192 [2024-06-10 11:39:36.269776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.127 11:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:12.127 11:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:12.127 11:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx 00:31:12.127 [2024-06-10 11:39:37.181354] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:12.127 [2024-06-10 11:39:37.181434] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:12.385 TLSTESTn1 00:31:12.385 11:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:31:12.644 11:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:31:12.644 "subsystems": [ 00:31:12.644 { 00:31:12.644 "subsystem": "keyring", 00:31:12.644 "config": [] 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "subsystem": "iobuf", 00:31:12.644 "config": [ 00:31:12.644 { 00:31:12.644 "method": "iobuf_set_options", 00:31:12.644 "params": { 00:31:12.644 "small_pool_count": 8192, 00:31:12.644 "large_pool_count": 1024, 00:31:12.644 "small_bufsize": 8192, 00:31:12.644 "large_bufsize": 135168 00:31:12.644 } 00:31:12.644 } 00:31:12.644 ] 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "subsystem": "sock", 00:31:12.644 "config": [ 00:31:12.644 { 00:31:12.644 "method": "sock_set_default_impl", 00:31:12.644 "params": { 00:31:12.644 "impl_name": "posix" 00:31:12.644 } 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "method": "sock_impl_set_options", 00:31:12.644 "params": { 00:31:12.644 "impl_name": "ssl", 00:31:12.644 "recv_buf_size": 4096, 00:31:12.644 "send_buf_size": 4096, 00:31:12.644 "enable_recv_pipe": true, 00:31:12.644 "enable_quickack": false, 00:31:12.644 "enable_placement_id": 0, 00:31:12.644 "enable_zerocopy_send_server": true, 00:31:12.644 "enable_zerocopy_send_client": false, 00:31:12.644 "zerocopy_threshold": 0, 00:31:12.644 "tls_version": 0, 00:31:12.644 "enable_ktls": false 00:31:12.644 } 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "method": "sock_impl_set_options", 00:31:12.644 "params": { 00:31:12.644 "impl_name": "posix", 00:31:12.644 "recv_buf_size": 2097152, 00:31:12.644 "send_buf_size": 
2097152, 00:31:12.644 "enable_recv_pipe": true, 00:31:12.644 "enable_quickack": false, 00:31:12.644 "enable_placement_id": 0, 00:31:12.644 "enable_zerocopy_send_server": true, 00:31:12.644 "enable_zerocopy_send_client": false, 00:31:12.644 "zerocopy_threshold": 0, 00:31:12.644 "tls_version": 0, 00:31:12.644 "enable_ktls": false 00:31:12.644 } 00:31:12.644 } 00:31:12.644 ] 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "subsystem": "vmd", 00:31:12.644 "config": [] 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "subsystem": "accel", 00:31:12.644 "config": [ 00:31:12.644 { 00:31:12.644 "method": "accel_set_options", 00:31:12.644 "params": { 00:31:12.644 "small_cache_size": 128, 00:31:12.644 "large_cache_size": 16, 00:31:12.644 "task_count": 2048, 00:31:12.644 "sequence_count": 2048, 00:31:12.644 "buf_count": 2048 00:31:12.644 } 00:31:12.644 } 00:31:12.644 ] 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "subsystem": "bdev", 00:31:12.644 "config": [ 00:31:12.644 { 00:31:12.644 "method": "bdev_set_options", 00:31:12.644 "params": { 00:31:12.644 "bdev_io_pool_size": 65535, 00:31:12.644 "bdev_io_cache_size": 256, 00:31:12.644 "bdev_auto_examine": true, 00:31:12.644 "iobuf_small_cache_size": 128, 00:31:12.644 "iobuf_large_cache_size": 16 00:31:12.644 } 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "method": "bdev_raid_set_options", 00:31:12.644 "params": { 00:31:12.644 "process_window_size_kb": 1024 00:31:12.644 } 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "method": "bdev_iscsi_set_options", 00:31:12.644 "params": { 00:31:12.644 "timeout_sec": 30 00:31:12.644 } 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "method": "bdev_nvme_set_options", 00:31:12.644 "params": { 00:31:12.644 "action_on_timeout": "none", 00:31:12.644 "timeout_us": 0, 00:31:12.644 "timeout_admin_us": 0, 00:31:12.644 "keep_alive_timeout_ms": 10000, 00:31:12.644 "arbitration_burst": 0, 00:31:12.644 "low_priority_weight": 0, 00:31:12.644 "medium_priority_weight": 0, 00:31:12.644 "high_priority_weight": 0, 00:31:12.644 "nvme_adminq_poll_period_us": 10000, 00:31:12.644 "nvme_ioq_poll_period_us": 0, 00:31:12.644 "io_queue_requests": 0, 00:31:12.644 "delay_cmd_submit": true, 00:31:12.644 "transport_retry_count": 4, 00:31:12.644 "bdev_retry_count": 3, 00:31:12.644 "transport_ack_timeout": 0, 00:31:12.644 "ctrlr_loss_timeout_sec": 0, 00:31:12.644 "reconnect_delay_sec": 0, 00:31:12.644 "fast_io_fail_timeout_sec": 0, 00:31:12.644 "disable_auto_failback": false, 00:31:12.644 "generate_uuids": false, 00:31:12.644 "transport_tos": 0, 00:31:12.644 "nvme_error_stat": false, 00:31:12.644 "rdma_srq_size": 0, 00:31:12.644 "io_path_stat": false, 00:31:12.644 "allow_accel_sequence": false, 00:31:12.644 "rdma_max_cq_size": 0, 00:31:12.644 "rdma_cm_event_timeout_ms": 0, 00:31:12.644 "dhchap_digests": [ 00:31:12.644 "sha256", 00:31:12.644 "sha384", 00:31:12.644 "sha512" 00:31:12.644 ], 00:31:12.644 "dhchap_dhgroups": [ 00:31:12.644 "null", 00:31:12.644 "ffdhe2048", 00:31:12.644 "ffdhe3072", 00:31:12.644 "ffdhe4096", 00:31:12.644 "ffdhe6144", 00:31:12.644 "ffdhe8192" 00:31:12.644 ] 00:31:12.644 } 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "method": "bdev_nvme_set_hotplug", 00:31:12.644 "params": { 00:31:12.644 "period_us": 100000, 00:31:12.644 "enable": false 00:31:12.644 } 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "method": "bdev_malloc_create", 00:31:12.644 "params": { 00:31:12.644 "name": "malloc0", 00:31:12.644 "num_blocks": 8192, 00:31:12.644 "block_size": 4096, 00:31:12.644 "physical_block_size": 4096, 00:31:12.644 "uuid": 
"d49b46b5-8207-475e-a47a-111908a77ff3", 00:31:12.644 "optimal_io_boundary": 0 00:31:12.644 } 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "method": "bdev_wait_for_examine" 00:31:12.644 } 00:31:12.644 ] 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "subsystem": "nbd", 00:31:12.644 "config": [] 00:31:12.644 }, 00:31:12.644 { 00:31:12.644 "subsystem": "scheduler", 00:31:12.644 "config": [ 00:31:12.645 { 00:31:12.645 "method": "framework_set_scheduler", 00:31:12.645 "params": { 00:31:12.645 "name": "static" 00:31:12.645 } 00:31:12.645 } 00:31:12.645 ] 00:31:12.645 }, 00:31:12.645 { 00:31:12.645 "subsystem": "nvmf", 00:31:12.645 "config": [ 00:31:12.645 { 00:31:12.645 "method": "nvmf_set_config", 00:31:12.645 "params": { 00:31:12.645 "discovery_filter": "match_any", 00:31:12.645 "admin_cmd_passthru": { 00:31:12.645 "identify_ctrlr": false 00:31:12.645 } 00:31:12.645 } 00:31:12.645 }, 00:31:12.645 { 00:31:12.645 "method": "nvmf_set_max_subsystems", 00:31:12.645 "params": { 00:31:12.645 "max_subsystems": 1024 00:31:12.645 } 00:31:12.645 }, 00:31:12.645 { 00:31:12.645 "method": "nvmf_set_crdt", 00:31:12.645 "params": { 00:31:12.645 "crdt1": 0, 00:31:12.645 "crdt2": 0, 00:31:12.645 "crdt3": 0 00:31:12.645 } 00:31:12.645 }, 00:31:12.645 { 00:31:12.645 "method": "nvmf_create_transport", 00:31:12.645 "params": { 00:31:12.645 "trtype": "TCP", 00:31:12.645 "max_queue_depth": 128, 00:31:12.645 "max_io_qpairs_per_ctrlr": 127, 00:31:12.645 "in_capsule_data_size": 4096, 00:31:12.645 "max_io_size": 131072, 00:31:12.645 "io_unit_size": 131072, 00:31:12.645 "max_aq_depth": 128, 00:31:12.645 "num_shared_buffers": 511, 00:31:12.645 "buf_cache_size": 4294967295, 00:31:12.645 "dif_insert_or_strip": false, 00:31:12.645 "zcopy": false, 00:31:12.645 "c2h_success": false, 00:31:12.645 "sock_priority": 0, 00:31:12.645 "abort_timeout_sec": 1, 00:31:12.645 "ack_timeout": 0, 00:31:12.645 "data_wr_pool_size": 0 00:31:12.645 } 00:31:12.645 }, 00:31:12.645 { 00:31:12.645 "method": "nvmf_create_subsystem", 00:31:12.645 "params": { 00:31:12.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.645 "allow_any_host": false, 00:31:12.645 "serial_number": "SPDK00000000000001", 00:31:12.645 "model_number": "SPDK bdev Controller", 00:31:12.645 "max_namespaces": 10, 00:31:12.645 "min_cntlid": 1, 00:31:12.645 "max_cntlid": 65519, 00:31:12.645 "ana_reporting": false 00:31:12.645 } 00:31:12.645 }, 00:31:12.645 { 00:31:12.645 "method": "nvmf_subsystem_add_host", 00:31:12.645 "params": { 00:31:12.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.645 "host": "nqn.2016-06.io.spdk:host1", 00:31:12.645 "psk": "/tmp/tmp.sSztWSNBtx" 00:31:12.645 } 00:31:12.645 }, 00:31:12.645 { 00:31:12.645 "method": "nvmf_subsystem_add_ns", 00:31:12.645 "params": { 00:31:12.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.645 "namespace": { 00:31:12.645 "nsid": 1, 00:31:12.645 "bdev_name": "malloc0", 00:31:12.645 "nguid": "D49B46B58207475EA47A111908A77FF3", 00:31:12.645 "uuid": "d49b46b5-8207-475e-a47a-111908a77ff3", 00:31:12.645 "no_auto_visible": false 00:31:12.645 } 00:31:12.645 } 00:31:12.645 }, 00:31:12.645 { 00:31:12.645 "method": "nvmf_subsystem_add_listener", 00:31:12.645 "params": { 00:31:12.645 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.645 "listen_address": { 00:31:12.645 "trtype": "TCP", 00:31:12.645 "adrfam": "IPv4", 00:31:12.645 "traddr": "10.0.0.2", 00:31:12.645 "trsvcid": "4420" 00:31:12.645 }, 00:31:12.645 "secure_channel": true 00:31:12.645 } 00:31:12.645 } 00:31:12.645 ] 00:31:12.645 } 00:31:12.645 ] 00:31:12.645 }' 00:31:12.645 11:39:37 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:31:12.904 11:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:31:12.904 "subsystems": [ 00:31:12.904 { 00:31:12.904 "subsystem": "keyring", 00:31:12.904 "config": [] 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "subsystem": "iobuf", 00:31:12.904 "config": [ 00:31:12.904 { 00:31:12.904 "method": "iobuf_set_options", 00:31:12.904 "params": { 00:31:12.904 "small_pool_count": 8192, 00:31:12.904 "large_pool_count": 1024, 00:31:12.904 "small_bufsize": 8192, 00:31:12.904 "large_bufsize": 135168 00:31:12.904 } 00:31:12.904 } 00:31:12.904 ] 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "subsystem": "sock", 00:31:12.904 "config": [ 00:31:12.904 { 00:31:12.904 "method": "sock_set_default_impl", 00:31:12.904 "params": { 00:31:12.904 "impl_name": "posix" 00:31:12.904 } 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "method": "sock_impl_set_options", 00:31:12.904 "params": { 00:31:12.904 "impl_name": "ssl", 00:31:12.904 "recv_buf_size": 4096, 00:31:12.904 "send_buf_size": 4096, 00:31:12.904 "enable_recv_pipe": true, 00:31:12.904 "enable_quickack": false, 00:31:12.904 "enable_placement_id": 0, 00:31:12.904 "enable_zerocopy_send_server": true, 00:31:12.904 "enable_zerocopy_send_client": false, 00:31:12.904 "zerocopy_threshold": 0, 00:31:12.904 "tls_version": 0, 00:31:12.904 "enable_ktls": false 00:31:12.904 } 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "method": "sock_impl_set_options", 00:31:12.904 "params": { 00:31:12.904 "impl_name": "posix", 00:31:12.904 "recv_buf_size": 2097152, 00:31:12.904 "send_buf_size": 2097152, 00:31:12.904 "enable_recv_pipe": true, 00:31:12.904 "enable_quickack": false, 00:31:12.904 "enable_placement_id": 0, 00:31:12.904 "enable_zerocopy_send_server": true, 00:31:12.904 "enable_zerocopy_send_client": false, 00:31:12.904 "zerocopy_threshold": 0, 00:31:12.904 "tls_version": 0, 00:31:12.904 "enable_ktls": false 00:31:12.904 } 00:31:12.904 } 00:31:12.904 ] 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "subsystem": "vmd", 00:31:12.904 "config": [] 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "subsystem": "accel", 00:31:12.904 "config": [ 00:31:12.904 { 00:31:12.904 "method": "accel_set_options", 00:31:12.904 "params": { 00:31:12.904 "small_cache_size": 128, 00:31:12.904 "large_cache_size": 16, 00:31:12.904 "task_count": 2048, 00:31:12.904 "sequence_count": 2048, 00:31:12.904 "buf_count": 2048 00:31:12.904 } 00:31:12.904 } 00:31:12.904 ] 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "subsystem": "bdev", 00:31:12.904 "config": [ 00:31:12.904 { 00:31:12.904 "method": "bdev_set_options", 00:31:12.904 "params": { 00:31:12.904 "bdev_io_pool_size": 65535, 00:31:12.904 "bdev_io_cache_size": 256, 00:31:12.904 "bdev_auto_examine": true, 00:31:12.904 "iobuf_small_cache_size": 128, 00:31:12.904 "iobuf_large_cache_size": 16 00:31:12.904 } 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "method": "bdev_raid_set_options", 00:31:12.904 "params": { 00:31:12.904 "process_window_size_kb": 1024 00:31:12.904 } 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "method": "bdev_iscsi_set_options", 00:31:12.904 "params": { 00:31:12.904 "timeout_sec": 30 00:31:12.904 } 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "method": "bdev_nvme_set_options", 00:31:12.904 "params": { 00:31:12.904 "action_on_timeout": "none", 00:31:12.904 "timeout_us": 0, 00:31:12.904 "timeout_admin_us": 0, 00:31:12.904 "keep_alive_timeout_ms": 10000, 00:31:12.904 "arbitration_burst": 0, 
00:31:12.904 "low_priority_weight": 0, 00:31:12.904 "medium_priority_weight": 0, 00:31:12.904 "high_priority_weight": 0, 00:31:12.904 "nvme_adminq_poll_period_us": 10000, 00:31:12.904 "nvme_ioq_poll_period_us": 0, 00:31:12.904 "io_queue_requests": 512, 00:31:12.904 "delay_cmd_submit": true, 00:31:12.904 "transport_retry_count": 4, 00:31:12.904 "bdev_retry_count": 3, 00:31:12.904 "transport_ack_timeout": 0, 00:31:12.904 "ctrlr_loss_timeout_sec": 0, 00:31:12.904 "reconnect_delay_sec": 0, 00:31:12.904 "fast_io_fail_timeout_sec": 0, 00:31:12.904 "disable_auto_failback": false, 00:31:12.904 "generate_uuids": false, 00:31:12.904 "transport_tos": 0, 00:31:12.904 "nvme_error_stat": false, 00:31:12.904 "rdma_srq_size": 0, 00:31:12.904 "io_path_stat": false, 00:31:12.904 "allow_accel_sequence": false, 00:31:12.904 "rdma_max_cq_size": 0, 00:31:12.904 "rdma_cm_event_timeout_ms": 0, 00:31:12.904 "dhchap_digests": [ 00:31:12.904 "sha256", 00:31:12.904 "sha384", 00:31:12.904 "sha512" 00:31:12.904 ], 00:31:12.904 "dhchap_dhgroups": [ 00:31:12.904 "null", 00:31:12.904 "ffdhe2048", 00:31:12.904 "ffdhe3072", 00:31:12.904 "ffdhe4096", 00:31:12.904 "ffdhe6144", 00:31:12.904 "ffdhe8192" 00:31:12.904 ] 00:31:12.904 } 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "method": "bdev_nvme_attach_controller", 00:31:12.904 "params": { 00:31:12.904 "name": "TLSTEST", 00:31:12.904 "trtype": "TCP", 00:31:12.904 "adrfam": "IPv4", 00:31:12.904 "traddr": "10.0.0.2", 00:31:12.904 "trsvcid": "4420", 00:31:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.904 "prchk_reftag": false, 00:31:12.904 "prchk_guard": false, 00:31:12.904 "ctrlr_loss_timeout_sec": 0, 00:31:12.904 "reconnect_delay_sec": 0, 00:31:12.904 "fast_io_fail_timeout_sec": 0, 00:31:12.904 "psk": "/tmp/tmp.sSztWSNBtx", 00:31:12.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:12.904 "hdgst": false, 00:31:12.904 "ddgst": false 00:31:12.904 } 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "method": "bdev_nvme_set_hotplug", 00:31:12.904 "params": { 00:31:12.904 "period_us": 100000, 00:31:12.904 "enable": false 00:31:12.904 } 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "method": "bdev_wait_for_examine" 00:31:12.904 } 00:31:12.904 ] 00:31:12.904 }, 00:31:12.904 { 00:31:12.904 "subsystem": "nbd", 00:31:12.904 "config": [] 00:31:12.904 } 00:31:12.904 ] 00:31:12.904 }' 00:31:12.904 11:39:37 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4025670 00:31:12.904 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4025670 ']' 00:31:12.904 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4025670 00:31:12.904 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:12.904 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:12.905 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4025670 00:31:12.905 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:31:12.905 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:31:12.905 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4025670' 00:31:12.905 killing process with pid 4025670 00:31:12.905 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4025670 00:31:12.905 Received shutdown signal, test time was about 10.000000 seconds 00:31:12.905 00:31:12.905 Latency(us) 00:31:12.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:31:12.905 =================================================================================================================== 00:31:12.905 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:12.905 [2024-06-10 11:39:37.825739] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:12.905 11:39:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4025670 00:31:12.905 11:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4025377 00:31:12.905 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4025377 ']' 00:31:12.905 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4025377 00:31:12.905 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:13.163 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:13.163 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4025377 00:31:13.163 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:13.163 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:13.163 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4025377' 00:31:13.163 killing process with pid 4025377 00:31:13.163 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4025377 00:31:13.163 [2024-06-10 11:39:38.059516] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:13.163 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4025377 00:31:13.422 11:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:31:13.422 11:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:13.422 11:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:31:13.422 "subsystems": [ 00:31:13.422 { 00:31:13.422 "subsystem": "keyring", 00:31:13.422 "config": [] 00:31:13.422 }, 00:31:13.422 { 00:31:13.422 "subsystem": "iobuf", 00:31:13.422 "config": [ 00:31:13.422 { 00:31:13.422 "method": "iobuf_set_options", 00:31:13.422 "params": { 00:31:13.422 "small_pool_count": 8192, 00:31:13.422 "large_pool_count": 1024, 00:31:13.422 "small_bufsize": 8192, 00:31:13.422 "large_bufsize": 135168 00:31:13.422 } 00:31:13.422 } 00:31:13.422 ] 00:31:13.422 }, 00:31:13.422 { 00:31:13.422 "subsystem": "sock", 00:31:13.422 "config": [ 00:31:13.422 { 00:31:13.422 "method": "sock_set_default_impl", 00:31:13.422 "params": { 00:31:13.422 "impl_name": "posix" 00:31:13.422 } 00:31:13.422 }, 00:31:13.422 { 00:31:13.422 "method": "sock_impl_set_options", 00:31:13.422 "params": { 00:31:13.422 "impl_name": "ssl", 00:31:13.422 "recv_buf_size": 4096, 00:31:13.422 "send_buf_size": 4096, 00:31:13.422 "enable_recv_pipe": true, 00:31:13.422 "enable_quickack": false, 00:31:13.422 "enable_placement_id": 0, 00:31:13.422 "enable_zerocopy_send_server": true, 00:31:13.422 "enable_zerocopy_send_client": false, 00:31:13.422 "zerocopy_threshold": 0, 00:31:13.422 "tls_version": 0, 00:31:13.422 "enable_ktls": false 00:31:13.422 } 00:31:13.422 }, 00:31:13.422 { 00:31:13.422 "method": "sock_impl_set_options", 00:31:13.422 "params": { 00:31:13.422 "impl_name": "posix", 00:31:13.422 "recv_buf_size": 2097152, 00:31:13.422 "send_buf_size": 2097152, 00:31:13.422 "enable_recv_pipe": true, 
00:31:13.422 "enable_quickack": false, 00:31:13.422 "enable_placement_id": 0, 00:31:13.422 "enable_zerocopy_send_server": true, 00:31:13.422 "enable_zerocopy_send_client": false, 00:31:13.422 "zerocopy_threshold": 0, 00:31:13.422 "tls_version": 0, 00:31:13.422 "enable_ktls": false 00:31:13.422 } 00:31:13.422 } 00:31:13.422 ] 00:31:13.422 }, 00:31:13.422 { 00:31:13.422 "subsystem": "vmd", 00:31:13.422 "config": [] 00:31:13.422 }, 00:31:13.422 { 00:31:13.422 "subsystem": "accel", 00:31:13.422 "config": [ 00:31:13.422 { 00:31:13.422 "method": "accel_set_options", 00:31:13.422 "params": { 00:31:13.422 "small_cache_size": 128, 00:31:13.422 "large_cache_size": 16, 00:31:13.422 "task_count": 2048, 00:31:13.422 "sequence_count": 2048, 00:31:13.422 "buf_count": 2048 00:31:13.422 } 00:31:13.422 } 00:31:13.422 ] 00:31:13.422 }, 00:31:13.422 { 00:31:13.422 "subsystem": "bdev", 00:31:13.422 "config": [ 00:31:13.422 { 00:31:13.422 "method": "bdev_set_options", 00:31:13.423 "params": { 00:31:13.423 "bdev_io_pool_size": 65535, 00:31:13.423 "bdev_io_cache_size": 256, 00:31:13.423 "bdev_auto_examine": true, 00:31:13.423 "iobuf_small_cache_size": 128, 00:31:13.423 "iobuf_large_cache_size": 16 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "bdev_raid_set_options", 00:31:13.423 "params": { 00:31:13.423 "process_window_size_kb": 1024 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "bdev_iscsi_set_options", 00:31:13.423 "params": { 00:31:13.423 "timeout_sec": 30 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "bdev_nvme_set_options", 00:31:13.423 "params": { 00:31:13.423 "action_on_timeout": "none", 00:31:13.423 "timeout_us": 0, 00:31:13.423 "timeout_admin_us": 0, 00:31:13.423 "keep_alive_timeout_ms": 10000, 00:31:13.423 "arbitration_burst": 0, 00:31:13.423 "low_priority_weight": 0, 00:31:13.423 "medium_priority_weight": 0, 00:31:13.423 "high_priority_weight": 0, 00:31:13.423 "nvme_adminq_poll_period_us": 10000, 00:31:13.423 "nvme_ioq_poll_period_us": 0, 00:31:13.423 "io_queue_requests": 0, 00:31:13.423 "delay_cmd_submit": true, 00:31:13.423 "transport_retry_count": 4, 00:31:13.423 "bdev_retry_count": 3, 00:31:13.423 "transport_ack_timeout": 0, 00:31:13.423 "ctrlr_loss_timeout_sec": 0, 00:31:13.423 "reconnect_delay_sec": 0, 00:31:13.423 "fast_io_fail_timeout_sec": 0, 00:31:13.423 "disable_auto_failback": false, 00:31:13.423 "generate_uuids": false, 00:31:13.423 "transport_tos": 0, 00:31:13.423 "nvme_error_stat": false, 00:31:13.423 "rdma_srq_size": 0, 00:31:13.423 "io_path_stat": false, 00:31:13.423 "allow_accel_sequence": false, 00:31:13.423 "rdma_max_cq_size": 0, 00:31:13.423 "rdma_cm_event_timeout_ms": 0, 00:31:13.423 "dhchap_digests": [ 00:31:13.423 "sha256", 00:31:13.423 "sha384", 00:31:13.423 "sha512" 00:31:13.423 ], 00:31:13.423 "dhchap_dhgroups": [ 00:31:13.423 "null", 00:31:13.423 "ffdhe2048", 00:31:13.423 "ffdhe3072", 00:31:13.423 "ffdhe4096", 00:31:13.423 "ffdhe6144", 00:31:13.423 "ffdhe8192" 00:31:13.423 ] 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "bdev_nvme_set_hotplug", 00:31:13.423 "params": { 00:31:13.423 "period_us": 100000, 00:31:13.423 "enable": false 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "bdev_malloc_create", 00:31:13.423 "params": { 00:31:13.423 "name": "malloc0", 00:31:13.423 "num_blocks": 8192, 00:31:13.423 "block_size": 4096, 00:31:13.423 "physical_block_size": 4096, 00:31:13.423 "uuid": "d49b46b5-8207-475e-a47a-111908a77ff3", 00:31:13.423 "optimal_io_boundary": 0 
00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "bdev_wait_for_examine" 00:31:13.423 } 00:31:13.423 ] 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "subsystem": "nbd", 00:31:13.423 "config": [] 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "subsystem": "scheduler", 00:31:13.423 "config": [ 00:31:13.423 { 00:31:13.423 "method": "framework_set_scheduler", 00:31:13.423 "params": { 00:31:13.423 "name": "static" 00:31:13.423 } 00:31:13.423 } 00:31:13.423 ] 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "subsystem": "nvmf", 00:31:13.423 "config": [ 00:31:13.423 { 00:31:13.423 "method": "nvmf_set_config", 00:31:13.423 "params": { 00:31:13.423 "discovery_filter": "match_any", 00:31:13.423 "admin_cmd_passthru": { 00:31:13.423 "identify_ctrlr": false 00:31:13.423 } 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "nvmf_set_max_subsystems", 00:31:13.423 "params": { 00:31:13.423 "max_subsystems": 1024 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "nvmf_set_crdt", 00:31:13.423 "params": { 00:31:13.423 "crdt1": 0, 00:31:13.423 "crdt2": 0, 00:31:13.423 "crdt3": 0 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "nvmf_create_transport", 00:31:13.423 "params": { 00:31:13.423 "trtype": "TCP", 00:31:13.423 "max_queue_depth": 128, 00:31:13.423 "max_io_qpairs_per_ctrlr": 127, 00:31:13.423 "in_capsule_data_size": 4096, 00:31:13.423 "max_io_size": 131072, 00:31:13.423 "io_unit_size": 131072, 00:31:13.423 "max_aq_depth": 128, 00:31:13.423 "num_shared_buffers": 511, 00:31:13.423 "buf_cache_size": 4294967295, 00:31:13.423 "dif_insert_or_strip": false, 00:31:13.423 "zcopy": false, 00:31:13.423 "c2h_success": false, 00:31:13.423 "sock_priority": 0, 00:31:13.423 "abort_timeout_sec": 1, 00:31:13.423 "ack_timeout": 0, 00:31:13.423 "data_wr_pool_size": 0 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "nvmf_create_subsystem", 00:31:13.423 "params": { 00:31:13.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.423 "allow_any_host": false, 00:31:13.423 "serial_number": "SPDK00000000000001", 00:31:13.423 "model_number": "SPDK bdev Controller", 00:31:13.423 "max_namespaces": 10, 00:31:13.423 "min_cntlid": 1, 00:31:13.423 "max_cntlid": 65519, 00:31:13.423 "ana_reporting": false 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "nvmf_subsystem_add_host", 00:31:13.423 "params": { 00:31:13.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.423 "host": "nqn.2016-06.io.spdk:host1", 00:31:13.423 "psk": "/tmp/tmp.sSztWSNBtx" 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "nvmf_subsystem_add_ns", 00:31:13.423 "params": { 00:31:13.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.423 "namespace": { 00:31:13.423 "nsid": 1, 00:31:13.423 "bdev_name": "malloc0", 00:31:13.423 "nguid": "D49B46B58207475EA47A111908A77FF3", 00:31:13.423 "uuid": "d49b46b5-8207-475e-a47a-111908a77ff3", 00:31:13.423 "no_auto_visible": false 00:31:13.423 } 00:31:13.423 } 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "method": "nvmf_subsystem_add_listener", 00:31:13.423 "params": { 00:31:13.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.423 "listen_address": { 00:31:13.423 "trtype": "TCP", 00:31:13.423 "adrfam": "IPv4", 00:31:13.423 "traddr": "10.0.0.2", 00:31:13.423 "trsvcid": "4420" 00:31:13.423 }, 00:31:13.423 "secure_channel": true 00:31:13.423 } 00:31:13.423 } 00:31:13.423 ] 00:31:13.423 } 00:31:13.423 ] 00:31:13.423 }' 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:13.423 
11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4026068 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4026068 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4026068 ']' 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:13.423 11:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:13.423 [2024-06-10 11:39:38.331853] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:31:13.423 [2024-06-10 11:39:38.331916] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.423 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.423 [2024-06-10 11:39:38.448318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.682 [2024-06-10 11:39:38.531594] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:13.683 [2024-06-10 11:39:38.531635] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:13.683 [2024-06-10 11:39:38.531649] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:13.683 [2024-06-10 11:39:38.531661] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:13.683 [2024-06-10 11:39:38.531671] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
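Note on the launch above: the target is started inside the cvl_0_0_ns_spdk network namespace with -c /dev/fd/62, i.e. the JSON echoed just before it is handed to nvmf_tgt through a file-descriptor redirection rather than a file on disk. A rough sketch of the same launch outside the harness, with the config written to an ordinary file instead (file name illustrative):
  CONFIG=$(mktemp)
  # write the JSON config echoed above into "$CONFIG" first
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c "$CONFIG"
The -m 0x2 core mask and -e 0xFFFF tracepoint group mask match the command line recorded in this log (the 0xFFFF mask is confirmed by the app_setup_trace notice above).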
00:31:13.683 [2024-06-10 11:39:38.531749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.683 [2024-06-10 11:39:38.740302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.683 [2024-06-10 11:39:38.756240] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:13.683 [2024-06-10 11:39:38.772293] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:13.683 [2024-06-10 11:39:38.782758] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4026240 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4026240 /var/tmp/bdevperf.sock 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4026240 ']' 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:14.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
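With the target now listening on 10.0.0.2:4420, the JSON it was started with is equivalent to a sequence of runtime RPCs; the second target in this run builds the same state over the RPC socket (target/tls.sh@51-58 further down). A minimal sketch of that sequence, with the rpc.py path shortened and the NQNs, PSK path and malloc sizes taken from this log:
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx
The -k flag on the listener requests the secure (TLS) channel and --psk points at the pre-shared key file, matching "secure_channel": true and the psk entry in the JSON above.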
00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:14.249 11:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:31:14.249 "subsystems": [ 00:31:14.249 { 00:31:14.249 "subsystem": "keyring", 00:31:14.249 "config": [] 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "subsystem": "iobuf", 00:31:14.249 "config": [ 00:31:14.249 { 00:31:14.249 "method": "iobuf_set_options", 00:31:14.249 "params": { 00:31:14.249 "small_pool_count": 8192, 00:31:14.249 "large_pool_count": 1024, 00:31:14.249 "small_bufsize": 8192, 00:31:14.249 "large_bufsize": 135168 00:31:14.249 } 00:31:14.249 } 00:31:14.249 ] 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "subsystem": "sock", 00:31:14.249 "config": [ 00:31:14.249 { 00:31:14.249 "method": "sock_set_default_impl", 00:31:14.249 "params": { 00:31:14.249 "impl_name": "posix" 00:31:14.249 } 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "method": "sock_impl_set_options", 00:31:14.249 "params": { 00:31:14.249 "impl_name": "ssl", 00:31:14.249 "recv_buf_size": 4096, 00:31:14.249 "send_buf_size": 4096, 00:31:14.249 "enable_recv_pipe": true, 00:31:14.249 "enable_quickack": false, 00:31:14.249 "enable_placement_id": 0, 00:31:14.249 "enable_zerocopy_send_server": true, 00:31:14.249 "enable_zerocopy_send_client": false, 00:31:14.249 "zerocopy_threshold": 0, 00:31:14.249 "tls_version": 0, 00:31:14.249 "enable_ktls": false 00:31:14.249 } 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "method": "sock_impl_set_options", 00:31:14.249 "params": { 00:31:14.249 "impl_name": "posix", 00:31:14.249 "recv_buf_size": 2097152, 00:31:14.249 "send_buf_size": 2097152, 00:31:14.249 "enable_recv_pipe": true, 00:31:14.249 "enable_quickack": false, 00:31:14.249 "enable_placement_id": 0, 00:31:14.249 "enable_zerocopy_send_server": true, 00:31:14.249 "enable_zerocopy_send_client": false, 00:31:14.249 "zerocopy_threshold": 0, 00:31:14.249 "tls_version": 0, 00:31:14.249 "enable_ktls": false 00:31:14.249 } 00:31:14.249 } 00:31:14.249 ] 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "subsystem": "vmd", 00:31:14.249 "config": [] 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "subsystem": "accel", 00:31:14.249 "config": [ 00:31:14.249 { 00:31:14.249 "method": "accel_set_options", 00:31:14.249 "params": { 00:31:14.249 "small_cache_size": 128, 00:31:14.249 "large_cache_size": 16, 00:31:14.249 "task_count": 2048, 00:31:14.249 "sequence_count": 2048, 00:31:14.249 "buf_count": 2048 00:31:14.249 } 00:31:14.249 } 00:31:14.249 ] 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "subsystem": "bdev", 00:31:14.249 "config": [ 00:31:14.249 { 00:31:14.249 "method": "bdev_set_options", 00:31:14.249 "params": { 00:31:14.249 "bdev_io_pool_size": 65535, 00:31:14.249 "bdev_io_cache_size": 256, 00:31:14.249 "bdev_auto_examine": true, 00:31:14.249 "iobuf_small_cache_size": 128, 00:31:14.249 "iobuf_large_cache_size": 16 00:31:14.249 } 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "method": "bdev_raid_set_options", 00:31:14.249 "params": { 00:31:14.249 "process_window_size_kb": 1024 00:31:14.249 } 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "method": "bdev_iscsi_set_options", 00:31:14.249 "params": { 00:31:14.249 "timeout_sec": 30 00:31:14.249 } 00:31:14.249 }, 00:31:14.249 { 00:31:14.249 "method": "bdev_nvme_set_options", 00:31:14.249 "params": { 00:31:14.249 "action_on_timeout": "none", 00:31:14.249 "timeout_us": 0, 00:31:14.249 "timeout_admin_us": 0, 00:31:14.249 "keep_alive_timeout_ms": 10000, 00:31:14.249 "arbitration_burst": 0, 00:31:14.250 "low_priority_weight": 0, 00:31:14.250 
"medium_priority_weight": 0, 00:31:14.250 "high_priority_weight": 0, 00:31:14.250 "nvme_adminq_poll_period_us": 10000, 00:31:14.250 "nvme_ioq_poll_period_us": 0, 00:31:14.250 "io_queue_requests": 512, 00:31:14.250 "delay_cmd_submit": true, 00:31:14.250 "transport_retry_count": 4, 00:31:14.250 "bdev_retry_count": 3, 00:31:14.250 "transport_ack_timeout": 0, 00:31:14.250 "ctrlr_loss_timeout_sec": 0, 00:31:14.250 "reconnect_delay_sec": 0, 00:31:14.250 "fast_io_fail_timeout_sec": 0, 00:31:14.250 "disable_auto_failback": false, 00:31:14.250 "generate_uuids": false, 00:31:14.250 "transport_tos": 0, 00:31:14.250 "nvme_error_stat": false, 00:31:14.250 "rdma_srq_size": 0, 00:31:14.250 "io_path_stat": false, 00:31:14.250 "allow_accel_sequence": false, 00:31:14.250 "rdma_max_cq_size": 0, 00:31:14.250 "rdma_cm_event_timeout_ms": 0, 00:31:14.250 "dhchap_digests": [ 00:31:14.250 "sha256", 00:31:14.250 "sha384", 00:31:14.250 "sha512" 00:31:14.250 ], 00:31:14.250 "dhchap_dhgroups": [ 00:31:14.250 "null", 00:31:14.250 "ffdhe2048", 00:31:14.250 "ffdhe3072", 00:31:14.250 "ffdhe4096", 00:31:14.250 "ffdhe6144", 00:31:14.250 "ffdhe8192" 00:31:14.250 ] 00:31:14.250 } 00:31:14.250 }, 00:31:14.250 { 00:31:14.250 "method": "bdev_nvme_attach_controller", 00:31:14.250 "params": { 00:31:14.250 "name": "TLSTEST", 00:31:14.250 "trtype": "TCP", 00:31:14.250 "adrfam": "IPv4", 00:31:14.250 "traddr": "10.0.0.2", 00:31:14.250 "trsvcid": "4420", 00:31:14.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:14.250 "prchk_reftag": false, 00:31:14.250 "prchk_guard": false, 00:31:14.250 "ctrlr_loss_timeout_sec": 0, 00:31:14.250 "reconnect_delay_sec": 0, 00:31:14.250 "fast_io_fail_timeout_sec": 0, 00:31:14.250 "psk": "/tmp/tmp.sSztWSNBtx", 00:31:14.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:14.250 "hdgst": false, 00:31:14.250 "ddgst": false 00:31:14.250 } 00:31:14.250 }, 00:31:14.250 { 00:31:14.250 "method": "bdev_nvme_set_hotplug", 00:31:14.250 "params": { 00:31:14.250 "period_us": 100000, 00:31:14.250 "enable": false 00:31:14.250 } 00:31:14.250 }, 00:31:14.250 { 00:31:14.250 "method": "bdev_wait_for_examine" 00:31:14.250 } 00:31:14.250 ] 00:31:14.250 }, 00:31:14.250 { 00:31:14.250 "subsystem": "nbd", 00:31:14.250 "config": [] 00:31:14.250 } 00:31:14.250 ] 00:31:14.250 }' 00:31:14.250 11:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:14.250 [2024-06-10 11:39:39.323602] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:31:14.250 [2024-06-10 11:39:39.323666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026240 ] 00:31:14.508 EAL: No free 2048 kB hugepages reported on node 1 00:31:14.508 [2024-06-10 11:39:39.416901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.508 [2024-06-10 11:39:39.485068] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:14.765 [2024-06-10 11:39:39.627999] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:14.765 [2024-06-10 11:39:39.628079] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:15.330 11:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:15.330 11:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:15.330 11:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:31:15.330 Running I/O for 10 seconds... 00:31:25.297 00:31:25.297 Latency(us) 00:31:25.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.297 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:25.297 Verification LBA range: start 0x0 length 0x2000 00:31:25.297 TLSTESTn1 : 10.03 3712.03 14.50 0.00 0.00 34416.20 7130.32 64172.85 00:31:25.297 =================================================================================================================== 00:31:25.297 Total : 3712.03 14.50 0.00 0.00 34416.20 7130.32 64172.85 00:31:25.297 0 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4026240 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4026240 ']' 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4026240 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4026240 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4026240' 00:31:25.556 killing process with pid 4026240 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4026240 00:31:25.556 Received shutdown signal, test time was about 10.000000 seconds 00:31:25.556 00:31:25.556 Latency(us) 00:31:25.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.556 =================================================================================================================== 00:31:25.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:25.556 [2024-06-10 11:39:50.471871] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4026240 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4026068 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4026068 ']' 00:31:25.556 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4026068 00:31:25.557 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:25.557 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:25.557 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4026068 00:31:25.815 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:25.815 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:25.815 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4026068' 00:31:25.815 killing process with pid 4026068 00:31:25.815 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4026068 00:31:25.815 [2024-06-10 11:39:50.704994] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:25.815 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4026068 00:31:25.815 11:39:50 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:31:25.815 11:39:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:25.815 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:25.815 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:26.074 11:39:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4028114 00:31:26.074 11:39:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4028114 00:31:26.074 11:39:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:26.074 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4028114 ']' 00:31:26.074 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.074 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:26.074 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.074 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:26.074 11:39:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:26.074 [2024-06-10 11:39:50.973978] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:31:26.074 [2024-06-10 11:39:50.974041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.074 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.074 [2024-06-10 11:39:51.100489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.332 [2024-06-10 11:39:51.183263] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:26.332 [2024-06-10 11:39:51.183304] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.332 [2024-06-10 11:39:51.183318] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.332 [2024-06-10 11:39:51.183330] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.332 [2024-06-10 11:39:51.183340] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.332 [2024-06-10 11:39:51.183366] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.899 11:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:26.899 11:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:26.899 11:39:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:26.899 11:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:26.899 11:39:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:26.899 11:39:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.899 11:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.sSztWSNBtx 00:31:26.899 11:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sSztWSNBtx 00:31:26.899 11:39:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:27.158 [2024-06-10 11:39:52.127154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.158 11:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:31:27.416 11:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:31:27.416 [2024-06-10 11:39:52.464036] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:27.416 [2024-06-10 11:39:52.464267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.416 11:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:31:27.675 malloc0 00:31:27.675 11:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:27.933 11:39:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sSztWSNBtx 00:31:27.933 [2024-06-10 11:39:52.982546] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:27.933 11:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:31:27.933 11:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4028648 00:31:27.934 11:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:31:27.934 11:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4028648 /var/tmp/bdevperf.sock 00:31:27.934 11:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4028648 ']' 00:31:27.934 11:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:27.934 11:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:27.934 11:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:27.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:27.934 11:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:27.934 11:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:27.934 [2024-06-10 11:39:53.036463] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:31:27.934 [2024-06-10 11:39:53.036514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028648 ] 00:31:28.192 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.192 [2024-06-10 11:39:53.132075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.192 [2024-06-10 11:39:53.213528] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.759 11:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:28.759 11:39:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:28.759 11:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sSztWSNBtx 00:31:29.018 11:39:53 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:31:29.276 [2024-06-10 11:39:54.148917] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:29.276 nvme0n1 00:31:29.276 11:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:29.276 Running I/O for 1 seconds... 
00:31:30.653 00:31:30.653 Latency(us) 00:31:30.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.653 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:30.653 Verification LBA range: start 0x0 length 0x2000 00:31:30.653 nvme0n1 : 1.03 3553.74 13.88 0.00 0.00 35496.22 9384.76 60397.98 00:31:30.654 =================================================================================================================== 00:31:30.654 Total : 3553.74 13.88 0.00 0.00 35496.22 9384.76 60397.98 00:31:30.654 0 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4028648 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4028648 ']' 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4028648 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4028648 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4028648' 00:31:30.654 killing process with pid 4028648 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4028648 00:31:30.654 Received shutdown signal, test time was about 1.000000 seconds 00:31:30.654 00:31:30.654 Latency(us) 00:31:30.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.654 =================================================================================================================== 00:31:30.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4028648 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4028114 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4028114 ']' 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4028114 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4028114 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4028114' 00:31:30.654 killing process with pid 4028114 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4028114 00:31:30.654 [2024-06-10 11:39:55.700909] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:30.654 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4028114 00:31:30.912 11:39:55 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:31:30.912 11:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:30.912 
11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4029054 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4029054 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4029054 ']' 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:30.913 11:39:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:30.913 [2024-06-10 11:39:55.974813] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:31:30.913 [2024-06-10 11:39:55.974884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.171 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.171 [2024-06-10 11:39:56.100346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.172 [2024-06-10 11:39:56.183511] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.172 [2024-06-10 11:39:56.183558] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.172 [2024-06-10 11:39:56.183571] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.172 [2024-06-10 11:39:56.183604] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.172 [2024-06-10 11:39:56.183614] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:31.172 [2024-06-10 11:39:56.183648] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:32.108 [2024-06-10 11:39:56.924027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.108 malloc0 00:31:32.108 [2024-06-10 11:39:56.953294] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:32.108 [2024-06-10 11:39:56.953536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=4029222 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 4029222 /var/tmp/bdevperf.sock 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4029222 ']' 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:32.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:32.108 11:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:32.108 [2024-06-10 11:39:57.034600] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
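Each bdevperf in this test is started with -z, so it comes up idle, listening on the RPC socket given by -r, and only runs the workload once perform_tests is issued through the helper script, which is what the perform_tests calls above and below do. A minimal sketch of that two-step pattern with the flags from this log (backgrounding the first command is only for illustration):
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
-q sets the queue depth, -o the I/O size, -w the workload and -t the run time in seconds. As a sanity check on the earlier 10-second run, 3712.03 IOPS at 4096-byte I/Os works out to the 14.50 MiB/s reported in the same row.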
00:31:32.109 [2024-06-10 11:39:57.034658] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029222 ] 00:31:32.109 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.109 [2024-06-10 11:39:57.144570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.367 [2024-06-10 11:39:57.230359] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.966 11:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:32.966 11:39:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:32.966 11:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sSztWSNBtx 00:31:33.233 11:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:31:33.497 [2024-06-10 11:39:58.366328] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:33.497 nvme0n1 00:31:33.497 11:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:33.497 Running I/O for 1 seconds... 00:31:34.875 00:31:34.875 Latency(us) 00:31:34.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.875 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:34.875 Verification LBA range: start 0x0 length 0x2000 00:31:34.875 nvme0n1 : 1.03 3758.90 14.68 0.00 0.00 33541.82 9227.47 55364.81 00:31:34.875 =================================================================================================================== 00:31:34.875 Total : 3758.90 14.68 0.00 0.00 33541.82 9227.47 55364.81 00:31:34.875 0 00:31:34.875 11:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:31:34.875 11:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:34.875 11:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:34.875 11:39:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:34.875 11:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:31:34.875 "subsystems": [ 00:31:34.875 { 00:31:34.875 "subsystem": "keyring", 00:31:34.875 "config": [ 00:31:34.875 { 00:31:34.875 "method": "keyring_file_add_key", 00:31:34.875 "params": { 00:31:34.875 "name": "key0", 00:31:34.875 "path": "/tmp/tmp.sSztWSNBtx" 00:31:34.875 } 00:31:34.875 } 00:31:34.875 ] 00:31:34.875 }, 00:31:34.875 { 00:31:34.875 "subsystem": "iobuf", 00:31:34.875 "config": [ 00:31:34.875 { 00:31:34.875 "method": "iobuf_set_options", 00:31:34.875 "params": { 00:31:34.875 "small_pool_count": 8192, 00:31:34.875 "large_pool_count": 1024, 00:31:34.875 "small_bufsize": 8192, 00:31:34.875 "large_bufsize": 135168 00:31:34.875 } 00:31:34.875 } 00:31:34.875 ] 00:31:34.875 }, 00:31:34.875 { 00:31:34.875 "subsystem": "sock", 00:31:34.875 "config": [ 00:31:34.875 { 00:31:34.875 "method": "sock_set_default_impl", 00:31:34.875 "params": { 00:31:34.875 "impl_name": "posix" 00:31:34.875 } 00:31:34.875 }, 00:31:34.875 
{ 00:31:34.875 "method": "sock_impl_set_options", 00:31:34.875 "params": { 00:31:34.875 "impl_name": "ssl", 00:31:34.875 "recv_buf_size": 4096, 00:31:34.875 "send_buf_size": 4096, 00:31:34.875 "enable_recv_pipe": true, 00:31:34.875 "enable_quickack": false, 00:31:34.875 "enable_placement_id": 0, 00:31:34.875 "enable_zerocopy_send_server": true, 00:31:34.875 "enable_zerocopy_send_client": false, 00:31:34.875 "zerocopy_threshold": 0, 00:31:34.875 "tls_version": 0, 00:31:34.875 "enable_ktls": false 00:31:34.875 } 00:31:34.875 }, 00:31:34.875 { 00:31:34.875 "method": "sock_impl_set_options", 00:31:34.875 "params": { 00:31:34.875 "impl_name": "posix", 00:31:34.875 "recv_buf_size": 2097152, 00:31:34.875 "send_buf_size": 2097152, 00:31:34.875 "enable_recv_pipe": true, 00:31:34.875 "enable_quickack": false, 00:31:34.875 "enable_placement_id": 0, 00:31:34.875 "enable_zerocopy_send_server": true, 00:31:34.875 "enable_zerocopy_send_client": false, 00:31:34.875 "zerocopy_threshold": 0, 00:31:34.875 "tls_version": 0, 00:31:34.875 "enable_ktls": false 00:31:34.875 } 00:31:34.875 } 00:31:34.875 ] 00:31:34.875 }, 00:31:34.875 { 00:31:34.875 "subsystem": "vmd", 00:31:34.875 "config": [] 00:31:34.875 }, 00:31:34.875 { 00:31:34.875 "subsystem": "accel", 00:31:34.875 "config": [ 00:31:34.875 { 00:31:34.875 "method": "accel_set_options", 00:31:34.875 "params": { 00:31:34.875 "small_cache_size": 128, 00:31:34.875 "large_cache_size": 16, 00:31:34.875 "task_count": 2048, 00:31:34.875 "sequence_count": 2048, 00:31:34.875 "buf_count": 2048 00:31:34.875 } 00:31:34.875 } 00:31:34.875 ] 00:31:34.875 }, 00:31:34.875 { 00:31:34.875 "subsystem": "bdev", 00:31:34.875 "config": [ 00:31:34.875 { 00:31:34.875 "method": "bdev_set_options", 00:31:34.875 "params": { 00:31:34.875 "bdev_io_pool_size": 65535, 00:31:34.875 "bdev_io_cache_size": 256, 00:31:34.875 "bdev_auto_examine": true, 00:31:34.875 "iobuf_small_cache_size": 128, 00:31:34.875 "iobuf_large_cache_size": 16 00:31:34.875 } 00:31:34.875 }, 00:31:34.875 { 00:31:34.875 "method": "bdev_raid_set_options", 00:31:34.875 "params": { 00:31:34.875 "process_window_size_kb": 1024 00:31:34.875 } 00:31:34.875 }, 00:31:34.875 { 00:31:34.875 "method": "bdev_iscsi_set_options", 00:31:34.875 "params": { 00:31:34.875 "timeout_sec": 30 00:31:34.875 } 00:31:34.875 }, 00:31:34.875 { 00:31:34.875 "method": "bdev_nvme_set_options", 00:31:34.875 "params": { 00:31:34.875 "action_on_timeout": "none", 00:31:34.875 "timeout_us": 0, 00:31:34.875 "timeout_admin_us": 0, 00:31:34.875 "keep_alive_timeout_ms": 10000, 00:31:34.875 "arbitration_burst": 0, 00:31:34.875 "low_priority_weight": 0, 00:31:34.875 "medium_priority_weight": 0, 00:31:34.875 "high_priority_weight": 0, 00:31:34.875 "nvme_adminq_poll_period_us": 10000, 00:31:34.875 "nvme_ioq_poll_period_us": 0, 00:31:34.875 "io_queue_requests": 0, 00:31:34.875 "delay_cmd_submit": true, 00:31:34.875 "transport_retry_count": 4, 00:31:34.875 "bdev_retry_count": 3, 00:31:34.875 "transport_ack_timeout": 0, 00:31:34.875 "ctrlr_loss_timeout_sec": 0, 00:31:34.875 "reconnect_delay_sec": 0, 00:31:34.875 "fast_io_fail_timeout_sec": 0, 00:31:34.875 "disable_auto_failback": false, 00:31:34.875 "generate_uuids": false, 00:31:34.875 "transport_tos": 0, 00:31:34.875 "nvme_error_stat": false, 00:31:34.875 "rdma_srq_size": 0, 00:31:34.875 "io_path_stat": false, 00:31:34.875 "allow_accel_sequence": false, 00:31:34.875 "rdma_max_cq_size": 0, 00:31:34.875 "rdma_cm_event_timeout_ms": 0, 00:31:34.875 "dhchap_digests": [ 00:31:34.875 "sha256", 00:31:34.875 "sha384", 
00:31:34.875 "sha512" 00:31:34.875 ], 00:31:34.875 "dhchap_dhgroups": [ 00:31:34.875 "null", 00:31:34.875 "ffdhe2048", 00:31:34.875 "ffdhe3072", 00:31:34.875 "ffdhe4096", 00:31:34.875 "ffdhe6144", 00:31:34.876 "ffdhe8192" 00:31:34.876 ] 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "bdev_nvme_set_hotplug", 00:31:34.876 "params": { 00:31:34.876 "period_us": 100000, 00:31:34.876 "enable": false 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "bdev_malloc_create", 00:31:34.876 "params": { 00:31:34.876 "name": "malloc0", 00:31:34.876 "num_blocks": 8192, 00:31:34.876 "block_size": 4096, 00:31:34.876 "physical_block_size": 4096, 00:31:34.876 "uuid": "c5162c15-687a-4e2c-94b9-6beaca278ffa", 00:31:34.876 "optimal_io_boundary": 0 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "bdev_wait_for_examine" 00:31:34.876 } 00:31:34.876 ] 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "subsystem": "nbd", 00:31:34.876 "config": [] 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "subsystem": "scheduler", 00:31:34.876 "config": [ 00:31:34.876 { 00:31:34.876 "method": "framework_set_scheduler", 00:31:34.876 "params": { 00:31:34.876 "name": "static" 00:31:34.876 } 00:31:34.876 } 00:31:34.876 ] 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "subsystem": "nvmf", 00:31:34.876 "config": [ 00:31:34.876 { 00:31:34.876 "method": "nvmf_set_config", 00:31:34.876 "params": { 00:31:34.876 "discovery_filter": "match_any", 00:31:34.876 "admin_cmd_passthru": { 00:31:34.876 "identify_ctrlr": false 00:31:34.876 } 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "nvmf_set_max_subsystems", 00:31:34.876 "params": { 00:31:34.876 "max_subsystems": 1024 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "nvmf_set_crdt", 00:31:34.876 "params": { 00:31:34.876 "crdt1": 0, 00:31:34.876 "crdt2": 0, 00:31:34.876 "crdt3": 0 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "nvmf_create_transport", 00:31:34.876 "params": { 00:31:34.876 "trtype": "TCP", 00:31:34.876 "max_queue_depth": 128, 00:31:34.876 "max_io_qpairs_per_ctrlr": 127, 00:31:34.876 "in_capsule_data_size": 4096, 00:31:34.876 "max_io_size": 131072, 00:31:34.876 "io_unit_size": 131072, 00:31:34.876 "max_aq_depth": 128, 00:31:34.876 "num_shared_buffers": 511, 00:31:34.876 "buf_cache_size": 4294967295, 00:31:34.876 "dif_insert_or_strip": false, 00:31:34.876 "zcopy": false, 00:31:34.876 "c2h_success": false, 00:31:34.876 "sock_priority": 0, 00:31:34.876 "abort_timeout_sec": 1, 00:31:34.876 "ack_timeout": 0, 00:31:34.876 "data_wr_pool_size": 0 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "nvmf_create_subsystem", 00:31:34.876 "params": { 00:31:34.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:34.876 "allow_any_host": false, 00:31:34.876 "serial_number": "00000000000000000000", 00:31:34.876 "model_number": "SPDK bdev Controller", 00:31:34.876 "max_namespaces": 32, 00:31:34.876 "min_cntlid": 1, 00:31:34.876 "max_cntlid": 65519, 00:31:34.876 "ana_reporting": false 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "nvmf_subsystem_add_host", 00:31:34.876 "params": { 00:31:34.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:34.876 "host": "nqn.2016-06.io.spdk:host1", 00:31:34.876 "psk": "key0" 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "nvmf_subsystem_add_ns", 00:31:34.876 "params": { 00:31:34.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:34.876 "namespace": { 00:31:34.876 "nsid": 1, 00:31:34.876 "bdev_name": 
"malloc0", 00:31:34.876 "nguid": "C5162C15687A4E2C94B96BEACA278FFA", 00:31:34.876 "uuid": "c5162c15-687a-4e2c-94b9-6beaca278ffa", 00:31:34.876 "no_auto_visible": false 00:31:34.876 } 00:31:34.876 } 00:31:34.876 }, 00:31:34.876 { 00:31:34.876 "method": "nvmf_subsystem_add_listener", 00:31:34.876 "params": { 00:31:34.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:34.876 "listen_address": { 00:31:34.876 "trtype": "TCP", 00:31:34.876 "adrfam": "IPv4", 00:31:34.876 "traddr": "10.0.0.2", 00:31:34.876 "trsvcid": "4420" 00:31:34.876 }, 00:31:34.876 "secure_channel": true 00:31:34.876 } 00:31:34.876 } 00:31:34.876 ] 00:31:34.876 } 00:31:34.876 ] 00:31:34.876 }' 00:31:34.876 11:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:31:35.136 11:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:31:35.136 "subsystems": [ 00:31:35.136 { 00:31:35.136 "subsystem": "keyring", 00:31:35.136 "config": [ 00:31:35.136 { 00:31:35.136 "method": "keyring_file_add_key", 00:31:35.136 "params": { 00:31:35.136 "name": "key0", 00:31:35.136 "path": "/tmp/tmp.sSztWSNBtx" 00:31:35.136 } 00:31:35.136 } 00:31:35.136 ] 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "subsystem": "iobuf", 00:31:35.136 "config": [ 00:31:35.136 { 00:31:35.136 "method": "iobuf_set_options", 00:31:35.136 "params": { 00:31:35.136 "small_pool_count": 8192, 00:31:35.136 "large_pool_count": 1024, 00:31:35.136 "small_bufsize": 8192, 00:31:35.136 "large_bufsize": 135168 00:31:35.136 } 00:31:35.136 } 00:31:35.136 ] 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "subsystem": "sock", 00:31:35.136 "config": [ 00:31:35.136 { 00:31:35.136 "method": "sock_set_default_impl", 00:31:35.136 "params": { 00:31:35.136 "impl_name": "posix" 00:31:35.136 } 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "method": "sock_impl_set_options", 00:31:35.136 "params": { 00:31:35.136 "impl_name": "ssl", 00:31:35.136 "recv_buf_size": 4096, 00:31:35.136 "send_buf_size": 4096, 00:31:35.136 "enable_recv_pipe": true, 00:31:35.136 "enable_quickack": false, 00:31:35.136 "enable_placement_id": 0, 00:31:35.136 "enable_zerocopy_send_server": true, 00:31:35.136 "enable_zerocopy_send_client": false, 00:31:35.136 "zerocopy_threshold": 0, 00:31:35.136 "tls_version": 0, 00:31:35.136 "enable_ktls": false 00:31:35.136 } 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "method": "sock_impl_set_options", 00:31:35.136 "params": { 00:31:35.136 "impl_name": "posix", 00:31:35.136 "recv_buf_size": 2097152, 00:31:35.136 "send_buf_size": 2097152, 00:31:35.136 "enable_recv_pipe": true, 00:31:35.136 "enable_quickack": false, 00:31:35.136 "enable_placement_id": 0, 00:31:35.136 "enable_zerocopy_send_server": true, 00:31:35.136 "enable_zerocopy_send_client": false, 00:31:35.136 "zerocopy_threshold": 0, 00:31:35.136 "tls_version": 0, 00:31:35.136 "enable_ktls": false 00:31:35.136 } 00:31:35.136 } 00:31:35.136 ] 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "subsystem": "vmd", 00:31:35.136 "config": [] 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "subsystem": "accel", 00:31:35.136 "config": [ 00:31:35.136 { 00:31:35.136 "method": "accel_set_options", 00:31:35.136 "params": { 00:31:35.136 "small_cache_size": 128, 00:31:35.136 "large_cache_size": 16, 00:31:35.136 "task_count": 2048, 00:31:35.136 "sequence_count": 2048, 00:31:35.136 "buf_count": 2048 00:31:35.136 } 00:31:35.136 } 00:31:35.136 ] 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "subsystem": "bdev", 00:31:35.136 "config": [ 00:31:35.136 { 00:31:35.136 
"method": "bdev_set_options", 00:31:35.136 "params": { 00:31:35.136 "bdev_io_pool_size": 65535, 00:31:35.136 "bdev_io_cache_size": 256, 00:31:35.136 "bdev_auto_examine": true, 00:31:35.136 "iobuf_small_cache_size": 128, 00:31:35.136 "iobuf_large_cache_size": 16 00:31:35.136 } 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "method": "bdev_raid_set_options", 00:31:35.136 "params": { 00:31:35.136 "process_window_size_kb": 1024 00:31:35.136 } 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "method": "bdev_iscsi_set_options", 00:31:35.136 "params": { 00:31:35.136 "timeout_sec": 30 00:31:35.136 } 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "method": "bdev_nvme_set_options", 00:31:35.136 "params": { 00:31:35.136 "action_on_timeout": "none", 00:31:35.136 "timeout_us": 0, 00:31:35.136 "timeout_admin_us": 0, 00:31:35.136 "keep_alive_timeout_ms": 10000, 00:31:35.136 "arbitration_burst": 0, 00:31:35.136 "low_priority_weight": 0, 00:31:35.136 "medium_priority_weight": 0, 00:31:35.136 "high_priority_weight": 0, 00:31:35.136 "nvme_adminq_poll_period_us": 10000, 00:31:35.136 "nvme_ioq_poll_period_us": 0, 00:31:35.136 "io_queue_requests": 512, 00:31:35.136 "delay_cmd_submit": true, 00:31:35.136 "transport_retry_count": 4, 00:31:35.136 "bdev_retry_count": 3, 00:31:35.136 "transport_ack_timeout": 0, 00:31:35.136 "ctrlr_loss_timeout_sec": 0, 00:31:35.136 "reconnect_delay_sec": 0, 00:31:35.136 "fast_io_fail_timeout_sec": 0, 00:31:35.136 "disable_auto_failback": false, 00:31:35.136 "generate_uuids": false, 00:31:35.136 "transport_tos": 0, 00:31:35.136 "nvme_error_stat": false, 00:31:35.136 "rdma_srq_size": 0, 00:31:35.136 "io_path_stat": false, 00:31:35.136 "allow_accel_sequence": false, 00:31:35.136 "rdma_max_cq_size": 0, 00:31:35.136 "rdma_cm_event_timeout_ms": 0, 00:31:35.136 "dhchap_digests": [ 00:31:35.136 "sha256", 00:31:35.136 "sha384", 00:31:35.136 "sha512" 00:31:35.136 ], 00:31:35.136 "dhchap_dhgroups": [ 00:31:35.136 "null", 00:31:35.136 "ffdhe2048", 00:31:35.136 "ffdhe3072", 00:31:35.136 "ffdhe4096", 00:31:35.136 "ffdhe6144", 00:31:35.136 "ffdhe8192" 00:31:35.136 ] 00:31:35.136 } 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "method": "bdev_nvme_attach_controller", 00:31:35.136 "params": { 00:31:35.136 "name": "nvme0", 00:31:35.136 "trtype": "TCP", 00:31:35.136 "adrfam": "IPv4", 00:31:35.136 "traddr": "10.0.0.2", 00:31:35.136 "trsvcid": "4420", 00:31:35.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.136 "prchk_reftag": false, 00:31:35.136 "prchk_guard": false, 00:31:35.136 "ctrlr_loss_timeout_sec": 0, 00:31:35.136 "reconnect_delay_sec": 0, 00:31:35.136 "fast_io_fail_timeout_sec": 0, 00:31:35.136 "psk": "key0", 00:31:35.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:35.136 "hdgst": false, 00:31:35.136 "ddgst": false 00:31:35.136 } 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "method": "bdev_nvme_set_hotplug", 00:31:35.136 "params": { 00:31:35.136 "period_us": 100000, 00:31:35.136 "enable": false 00:31:35.136 } 00:31:35.136 }, 00:31:35.136 { 00:31:35.136 "method": "bdev_enable_histogram", 00:31:35.137 "params": { 00:31:35.137 "name": "nvme0n1", 00:31:35.137 "enable": true 00:31:35.137 } 00:31:35.137 }, 00:31:35.137 { 00:31:35.137 "method": "bdev_wait_for_examine" 00:31:35.137 } 00:31:35.137 ] 00:31:35.137 }, 00:31:35.137 { 00:31:35.137 "subsystem": "nbd", 00:31:35.137 "config": [] 00:31:35.137 } 00:31:35.137 ] 00:31:35.137 }' 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 4029222 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4029222 
']' 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4029222 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4029222 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4029222' 00:31:35.137 killing process with pid 4029222 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4029222 00:31:35.137 Received shutdown signal, test time was about 1.000000 seconds 00:31:35.137 00:31:35.137 Latency(us) 00:31:35.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.137 =================================================================================================================== 00:31:35.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:35.137 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4029222 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 4029054 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4029054 ']' 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4029054 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4029054 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4029054' 00:31:35.396 killing process with pid 4029054 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4029054 00:31:35.396 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4029054 00:31:35.655 11:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:31:35.655 11:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:35.655 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:35.655 11:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:31:35.655 "subsystems": [ 00:31:35.655 { 00:31:35.655 "subsystem": "keyring", 00:31:35.655 "config": [ 00:31:35.655 { 00:31:35.655 "method": "keyring_file_add_key", 00:31:35.655 "params": { 00:31:35.655 "name": "key0", 00:31:35.655 "path": "/tmp/tmp.sSztWSNBtx" 00:31:35.655 } 00:31:35.655 } 00:31:35.655 ] 00:31:35.655 }, 00:31:35.655 { 00:31:35.655 "subsystem": "iobuf", 00:31:35.655 "config": [ 00:31:35.655 { 00:31:35.655 "method": "iobuf_set_options", 00:31:35.655 "params": { 00:31:35.655 "small_pool_count": 8192, 00:31:35.656 "large_pool_count": 1024, 00:31:35.656 "small_bufsize": 8192, 00:31:35.656 "large_bufsize": 135168 00:31:35.656 } 00:31:35.656 } 00:31:35.656 ] 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "subsystem": "sock", 
00:31:35.656 "config": [ 00:31:35.656 { 00:31:35.656 "method": "sock_set_default_impl", 00:31:35.656 "params": { 00:31:35.656 "impl_name": "posix" 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "sock_impl_set_options", 00:31:35.656 "params": { 00:31:35.656 "impl_name": "ssl", 00:31:35.656 "recv_buf_size": 4096, 00:31:35.656 "send_buf_size": 4096, 00:31:35.656 "enable_recv_pipe": true, 00:31:35.656 "enable_quickack": false, 00:31:35.656 "enable_placement_id": 0, 00:31:35.656 "enable_zerocopy_send_server": true, 00:31:35.656 "enable_zerocopy_send_client": false, 00:31:35.656 "zerocopy_threshold": 0, 00:31:35.656 "tls_version": 0, 00:31:35.656 "enable_ktls": false 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "sock_impl_set_options", 00:31:35.656 "params": { 00:31:35.656 "impl_name": "posix", 00:31:35.656 "recv_buf_size": 2097152, 00:31:35.656 "send_buf_size": 2097152, 00:31:35.656 "enable_recv_pipe": true, 00:31:35.656 "enable_quickack": false, 00:31:35.656 "enable_placement_id": 0, 00:31:35.656 "enable_zerocopy_send_server": true, 00:31:35.656 "enable_zerocopy_send_client": false, 00:31:35.656 "zerocopy_threshold": 0, 00:31:35.656 "tls_version": 0, 00:31:35.656 "enable_ktls": false 00:31:35.656 } 00:31:35.656 } 00:31:35.656 ] 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "subsystem": "vmd", 00:31:35.656 "config": [] 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "subsystem": "accel", 00:31:35.656 "config": [ 00:31:35.656 { 00:31:35.656 "method": "accel_set_options", 00:31:35.656 "params": { 00:31:35.656 "small_cache_size": 128, 00:31:35.656 "large_cache_size": 16, 00:31:35.656 "task_count": 2048, 00:31:35.656 "sequence_count": 2048, 00:31:35.656 "buf_count": 2048 00:31:35.656 } 00:31:35.656 } 00:31:35.656 ] 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "subsystem": "bdev", 00:31:35.656 "config": [ 00:31:35.656 { 00:31:35.656 "method": "bdev_set_options", 00:31:35.656 "params": { 00:31:35.656 "bdev_io_pool_size": 65535, 00:31:35.656 "bdev_io_cache_size": 256, 00:31:35.656 "bdev_auto_examine": true, 00:31:35.656 "iobuf_small_cache_size": 128, 00:31:35.656 "iobuf_large_cache_size": 16 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "bdev_raid_set_options", 00:31:35.656 "params": { 00:31:35.656 "process_window_size_kb": 1024 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "bdev_iscsi_set_options", 00:31:35.656 "params": { 00:31:35.656 "timeout_sec": 30 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "bdev_nvme_set_options", 00:31:35.656 "params": { 00:31:35.656 "action_on_timeout": "none", 00:31:35.656 "timeout_us": 0, 00:31:35.656 "timeout_admin_us": 0, 00:31:35.656 "keep_alive_timeout_ms": 10000, 00:31:35.656 "arbitration_burst": 0, 00:31:35.656 "low_priority_weight": 0, 00:31:35.656 "medium_priority_weight": 0, 00:31:35.656 "high_priority_weight": 0, 00:31:35.656 "nvme_adminq_poll_period_us": 10000, 00:31:35.656 "nvme_ioq_poll_period_us": 0, 00:31:35.656 "io_queue_requests": 0, 00:31:35.656 "delay_cmd_submit": true, 00:31:35.656 "transport_retry_count": 4, 00:31:35.656 "bdev_retry_count": 3, 00:31:35.656 "transport_ack_timeout": 0, 00:31:35.656 "ctrlr_loss_timeout_sec": 0, 00:31:35.656 "reconnect_delay_sec": 0, 00:31:35.656 "fast_io_fail_timeout_sec": 0, 00:31:35.656 "disable_auto_failback": false, 00:31:35.656 "generate_uuids": false, 00:31:35.656 "transport_tos": 0, 00:31:35.656 "nvme_error_stat": false, 00:31:35.656 "rdma_srq_size": 0, 00:31:35.656 "io_path_stat": false, 00:31:35.656 
"allow_accel_sequence": false, 00:31:35.656 "rdma_max_cq_size": 0, 00:31:35.656 "rdma_cm_event_timeout_ms": 0, 00:31:35.656 "dhchap_digests": [ 00:31:35.656 "sha256", 00:31:35.656 "sha384", 00:31:35.656 "sha512" 00:31:35.656 ], 00:31:35.656 "dhchap_dhgroups": [ 00:31:35.656 "null", 00:31:35.656 "ffdhe2048", 00:31:35.656 "ffdhe3072", 00:31:35.656 "ffdhe4096", 00:31:35.656 "ffdhe6144", 00:31:35.656 "ffdhe8192" 00:31:35.656 ] 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "bdev_nvme_set_hotplug", 00:31:35.656 "params": { 00:31:35.656 "period_us": 100000, 00:31:35.656 "enable": false 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "bdev_malloc_create", 00:31:35.656 "params": { 00:31:35.656 "name": "malloc0", 00:31:35.656 "num_blocks": 8192, 00:31:35.656 "block_size": 4096, 00:31:35.656 "physical_block_size": 4096, 00:31:35.656 "uuid": "c5162c15-687a-4e2c-94b9-6beaca278ffa", 00:31:35.656 "optimal_io_boundary": 0 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "bdev_wait_for_examine" 00:31:35.656 } 00:31:35.656 ] 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "subsystem": "nbd", 00:31:35.656 "config": [] 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "subsystem": "scheduler", 00:31:35.656 "config": [ 00:31:35.656 { 00:31:35.656 "method": "framework_set_scheduler", 00:31:35.656 "params": { 00:31:35.656 "name": "static" 00:31:35.656 } 00:31:35.656 } 00:31:35.656 ] 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "subsystem": "nvmf", 00:31:35.656 "config": [ 00:31:35.656 { 00:31:35.656 "method": "nvmf_set_config", 00:31:35.656 "params": { 00:31:35.656 "discovery_filter": "match_any", 00:31:35.656 "admin_cmd_passthru": { 00:31:35.656 "identify_ctrlr": false 00:31:35.656 } 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "nvmf_set_max_subsystems", 00:31:35.656 "params": { 00:31:35.656 "max_subsystems": 1024 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "nvmf_set_crdt", 00:31:35.656 "params": { 00:31:35.656 "crdt1": 0, 00:31:35.656 "crdt2": 0, 00:31:35.656 "crdt3": 0 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "nvmf_create_transport", 00:31:35.656 "params": { 00:31:35.656 "trtype": "TCP", 00:31:35.656 "max_queue_depth": 128, 00:31:35.656 "max_io_qpairs_per_ctrlr": 127, 00:31:35.656 "in_capsule_data_size": 4096, 00:31:35.656 "max_io_size": 131072, 00:31:35.656 "io_unit_size": 131072, 00:31:35.656 "max_aq_depth": 128, 00:31:35.656 "num_shared_buffers": 511, 00:31:35.656 "buf_cache_size": 4294967295, 00:31:35.656 "dif_insert_or_strip": false, 00:31:35.656 "zcopy": false, 00:31:35.656 "c2h_success": false, 00:31:35.656 "sock_priority": 0, 00:31:35.656 "abort_timeout_sec": 1, 00:31:35.656 "ack_timeout": 0, 00:31:35.656 "data_wr_pool_size": 0 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "nvmf_create_subsystem", 00:31:35.656 "params": { 00:31:35.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.656 "allow_any_host": false, 00:31:35.656 "serial_number": "00000000000000000000", 00:31:35.656 "model_number": "SPDK bdev Controller", 00:31:35.656 "max_namespaces": 32, 00:31:35.656 "min_cntlid": 1, 00:31:35.656 "max_cntlid": 65519, 00:31:35.656 "ana_reporting": false 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "nvmf_subsystem_add_host", 00:31:35.656 "params": { 00:31:35.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.656 "host": "nqn.2016-06.io.spdk:host1", 00:31:35.656 "psk": "key0" 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 
"method": "nvmf_subsystem_add_ns", 00:31:35.656 "params": { 00:31:35.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.656 "namespace": { 00:31:35.656 "nsid": 1, 00:31:35.656 "bdev_name": "malloc0", 00:31:35.656 "nguid": "C5162C15687A4E2C94B96BEACA278FFA", 00:31:35.656 "uuid": "c5162c15-687a-4e2c-94b9-6beaca278ffa", 00:31:35.656 "no_auto_visible": false 00:31:35.656 } 00:31:35.656 } 00:31:35.656 }, 00:31:35.656 { 00:31:35.656 "method": "nvmf_subsystem_add_listener", 00:31:35.656 "params": { 00:31:35.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:35.656 "listen_address": { 00:31:35.656 "trtype": "TCP", 00:31:35.656 "adrfam": "IPv4", 00:31:35.656 "traddr": "10.0.0.2", 00:31:35.656 "trsvcid": "4420" 00:31:35.656 }, 00:31:35.656 "secure_channel": true 00:31:35.656 } 00:31:35.656 } 00:31:35.656 ] 00:31:35.656 } 00:31:35.656 ] 00:31:35.656 }' 00:31:35.656 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:35.656 11:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4029804 00:31:35.656 11:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4029804 00:31:35.657 11:40:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:31:35.657 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4029804 ']' 00:31:35.657 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.657 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:35.657 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.657 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:35.657 11:40:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:35.657 [2024-06-10 11:40:00.639276] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:31:35.657 [2024-06-10 11:40:00.639340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.657 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.916 [2024-06-10 11:40:00.765440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.916 [2024-06-10 11:40:00.847939] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.916 [2024-06-10 11:40:00.847988] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.916 [2024-06-10 11:40:00.848002] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.916 [2024-06-10 11:40:00.848014] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.916 [2024-06-10 11:40:00.848024] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:35.916 [2024-06-10 11:40:00.848094] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.174 [2024-06-10 11:40:01.066246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.174 [2024-06-10 11:40:01.098247] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:36.174 [2024-06-10 11:40:01.108932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.433 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:36.433 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:36.433 11:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:36.433 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:36.433 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=4030061 00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 4030061 /var/tmp/bdevperf.sock 00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 4030061 ']' 00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:36.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:36.693 11:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:31:36.693 "subsystems": [ 00:31:36.693 { 00:31:36.693 "subsystem": "keyring", 00:31:36.693 "config": [ 00:31:36.693 { 00:31:36.693 "method": "keyring_file_add_key", 00:31:36.693 "params": { 00:31:36.693 "name": "key0", 00:31:36.693 "path": "/tmp/tmp.sSztWSNBtx" 00:31:36.693 } 00:31:36.693 } 00:31:36.693 ] 00:31:36.693 }, 00:31:36.693 { 00:31:36.693 "subsystem": "iobuf", 00:31:36.693 "config": [ 00:31:36.693 { 00:31:36.693 "method": "iobuf_set_options", 00:31:36.693 "params": { 00:31:36.693 "small_pool_count": 8192, 00:31:36.693 "large_pool_count": 1024, 00:31:36.693 "small_bufsize": 8192, 00:31:36.693 "large_bufsize": 135168 00:31:36.693 } 00:31:36.693 } 00:31:36.693 ] 00:31:36.693 }, 00:31:36.693 { 00:31:36.693 "subsystem": "sock", 00:31:36.693 "config": [ 00:31:36.693 { 00:31:36.693 "method": "sock_set_default_impl", 00:31:36.693 "params": { 00:31:36.693 "impl_name": "posix" 00:31:36.693 } 00:31:36.693 }, 00:31:36.693 { 00:31:36.693 "method": "sock_impl_set_options", 00:31:36.693 "params": { 00:31:36.693 "impl_name": "ssl", 00:31:36.693 "recv_buf_size": 4096, 00:31:36.693 "send_buf_size": 4096, 00:31:36.693 "enable_recv_pipe": true, 00:31:36.693 "enable_quickack": false, 00:31:36.693 "enable_placement_id": 0, 00:31:36.693 "enable_zerocopy_send_server": true, 00:31:36.693 "enable_zerocopy_send_client": false, 00:31:36.693 "zerocopy_threshold": 0, 00:31:36.693 "tls_version": 0, 00:31:36.693 "enable_ktls": false 00:31:36.693 } 00:31:36.693 }, 00:31:36.693 { 00:31:36.693 "method": "sock_impl_set_options", 00:31:36.693 "params": { 00:31:36.693 "impl_name": "posix", 00:31:36.693 "recv_buf_size": 2097152, 00:31:36.693 "send_buf_size": 2097152, 00:31:36.693 "enable_recv_pipe": true, 00:31:36.693 "enable_quickack": false, 00:31:36.693 "enable_placement_id": 0, 00:31:36.693 "enable_zerocopy_send_server": true, 00:31:36.693 "enable_zerocopy_send_client": false, 00:31:36.693 "zerocopy_threshold": 0, 00:31:36.693 "tls_version": 0, 00:31:36.693 "enable_ktls": false 00:31:36.693 } 00:31:36.693 } 00:31:36.693 ] 00:31:36.693 }, 00:31:36.693 { 00:31:36.693 "subsystem": "vmd", 00:31:36.693 "config": [] 00:31:36.693 }, 00:31:36.693 { 00:31:36.693 "subsystem": "accel", 00:31:36.693 "config": [ 00:31:36.693 { 00:31:36.693 "method": "accel_set_options", 00:31:36.693 "params": { 00:31:36.693 "small_cache_size": 128, 00:31:36.693 "large_cache_size": 16, 00:31:36.693 "task_count": 2048, 00:31:36.693 "sequence_count": 2048, 00:31:36.693 "buf_count": 2048 00:31:36.693 } 00:31:36.693 } 00:31:36.693 ] 00:31:36.693 }, 00:31:36.693 { 00:31:36.693 "subsystem": "bdev", 00:31:36.694 "config": [ 00:31:36.694 { 00:31:36.694 "method": "bdev_set_options", 00:31:36.694 "params": { 00:31:36.694 "bdev_io_pool_size": 65535, 00:31:36.694 "bdev_io_cache_size": 256, 00:31:36.694 "bdev_auto_examine": true, 00:31:36.694 "iobuf_small_cache_size": 128, 00:31:36.694 "iobuf_large_cache_size": 16 00:31:36.694 } 00:31:36.694 }, 00:31:36.694 { 00:31:36.694 "method": "bdev_raid_set_options", 00:31:36.694 "params": { 00:31:36.694 "process_window_size_kb": 1024 00:31:36.694 } 00:31:36.694 }, 00:31:36.694 { 00:31:36.694 "method": "bdev_iscsi_set_options", 00:31:36.694 "params": { 00:31:36.694 "timeout_sec": 30 00:31:36.694 } 00:31:36.694 }, 00:31:36.694 { 00:31:36.694 "method": "bdev_nvme_set_options", 00:31:36.694 "params": { 00:31:36.694 "action_on_timeout": "none", 
00:31:36.694 "timeout_us": 0, 00:31:36.694 "timeout_admin_us": 0, 00:31:36.694 "keep_alive_timeout_ms": 10000, 00:31:36.694 "arbitration_burst": 0, 00:31:36.694 "low_priority_weight": 0, 00:31:36.694 "medium_priority_weight": 0, 00:31:36.694 "high_priority_weight": 0, 00:31:36.694 "nvme_adminq_poll_period_us": 10000, 00:31:36.694 "nvme_ioq_poll_period_us": 0, 00:31:36.694 "io_queue_requests": 512, 00:31:36.694 "delay_cmd_submit": true, 00:31:36.694 "transport_retry_count": 4, 00:31:36.694 "bdev_retry_count": 3, 00:31:36.694 "transport_ack_timeout": 0, 00:31:36.694 "ctrlr_loss_timeout_sec": 0, 00:31:36.694 "reconnect_delay_sec": 0, 00:31:36.694 "fast_io_fail_timeout_sec": 0, 00:31:36.694 "disable_auto_failback": false, 00:31:36.694 "generate_uuids": false, 00:31:36.694 "transport_tos": 0, 00:31:36.694 "nvme_error_stat": false, 00:31:36.694 "rdma_srq_size": 0, 00:31:36.694 "io_path_stat": false, 00:31:36.694 "allow_accel_sequence": false, 00:31:36.694 "rdma_max_cq_size": 0, 00:31:36.694 "rdma_cm_event_timeout_ms": 0, 00:31:36.694 "dhchap_digests": [ 00:31:36.694 "sha256", 00:31:36.694 "sha384", 00:31:36.694 "sh 11:40:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:36.694 a512" 00:31:36.694 ], 00:31:36.694 "dhchap_dhgroups": [ 00:31:36.694 "null", 00:31:36.694 "ffdhe2048", 00:31:36.694 "ffdhe3072", 00:31:36.694 "ffdhe4096", 00:31:36.694 "ffdhe6144", 00:31:36.694 "ffdhe8192" 00:31:36.694 ] 00:31:36.694 } 00:31:36.694 }, 00:31:36.694 { 00:31:36.694 "method": "bdev_nvme_attach_controller", 00:31:36.694 "params": { 00:31:36.694 "name": "nvme0", 00:31:36.694 "trtype": "TCP", 00:31:36.694 "adrfam": "IPv4", 00:31:36.694 "traddr": "10.0.0.2", 00:31:36.694 "trsvcid": "4420", 00:31:36.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.694 "prchk_reftag": false, 00:31:36.694 "prchk_guard": false, 00:31:36.694 "ctrlr_loss_timeout_sec": 0, 00:31:36.694 "reconnect_delay_sec": 0, 00:31:36.694 "fast_io_fail_timeout_sec": 0, 00:31:36.694 "psk": "key0", 00:31:36.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.694 "hdgst": false, 00:31:36.694 "ddgst": false 00:31:36.694 } 00:31:36.694 }, 00:31:36.694 { 00:31:36.694 "method": "bdev_nvme_set_hotplug", 00:31:36.694 "params": { 00:31:36.694 "period_us": 100000, 00:31:36.694 "enable": false 00:31:36.694 } 00:31:36.694 }, 00:31:36.694 { 00:31:36.694 "method": "bdev_enable_histogram", 00:31:36.694 "params": { 00:31:36.694 "name": "nvme0n1", 00:31:36.694 "enable": true 00:31:36.694 } 00:31:36.694 }, 00:31:36.694 { 00:31:36.694 "method": "bdev_wait_for_examine" 00:31:36.694 } 00:31:36.694 ] 00:31:36.694 }, 00:31:36.694 { 00:31:36.694 "subsystem": "nbd", 00:31:36.694 "config": [] 00:31:36.694 } 00:31:36.694 ] 00:31:36.694 }' 00:31:36.694 [2024-06-10 11:40:01.633156] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:31:36.694 [2024-06-10 11:40:01.633219] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030061 ] 00:31:36.694 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.694 [2024-06-10 11:40:01.745914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.953 [2024-06-10 11:40:01.831089] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.953 [2024-06-10 11:40:01.986714] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:37.517 11:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:37.517 11:40:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:31:37.518 11:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:37.518 11:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:31:37.776 11:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.776 11:40:02 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:37.776 Running I/O for 1 seconds... 00:31:39.151 00:31:39.151 Latency(us) 00:31:39.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.151 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:39.151 Verification LBA range: start 0x0 length 0x2000 00:31:39.151 nvme0n1 : 1.03 3559.07 13.90 0.00 0.00 35439.95 9542.04 51799.65 00:31:39.151 =================================================================================================================== 00:31:39.151 Total : 3559.07 13.90 0.00 0.00 35439.95 9542.04 51799.65 00:31:39.151 0 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:39.151 nvmf_trace.0 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4030061 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4030061 ']' 00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4030061 
00:31:39.151 11:40:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:39.151 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:39.151 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4030061 00:31:39.151 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:31:39.151 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:31:39.151 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4030061' 00:31:39.151 killing process with pid 4030061 00:31:39.151 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4030061 00:31:39.151 Received shutdown signal, test time was about 1.000000 seconds 00:31:39.151 00:31:39.151 Latency(us) 00:31:39.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.151 =================================================================================================================== 00:31:39.151 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:39.151 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4030061 00:31:39.151 11:40:04 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:31:39.151 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:39.152 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:31:39.152 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:39.152 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:31:39.152 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:39.152 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:39.410 rmmod nvme_tcp 00:31:39.410 rmmod nvme_fabrics 00:31:39.410 rmmod nvme_keyring 00:31:39.410 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4029804 ']' 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4029804 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 4029804 ']' 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 4029804 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4029804 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4029804' 00:31:39.411 killing process with pid 4029804 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 4029804 00:31:39.411 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 4029804 00:31:39.670 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:39.670 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:39.670 11:40:04 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:39.670 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:39.670 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:39.670 11:40:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.670 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:39.670 11:40:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.576 11:40:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:41.576 11:40:06 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qMFXYtCbeP /tmp/tmp.0c15iJPKou /tmp/tmp.sSztWSNBtx 00:31:41.576 00:31:41.576 real 1m32.055s 00:31:41.576 user 2m16.614s 00:31:41.576 sys 0m36.212s 00:31:41.576 11:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:41.576 11:40:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:31:41.576 ************************************ 00:31:41.576 END TEST nvmf_tls 00:31:41.576 ************************************ 00:31:41.835 11:40:06 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:31:41.835 11:40:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:41.835 11:40:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:41.835 11:40:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:41.835 ************************************ 00:31:41.835 START TEST nvmf_fips 00:31:41.835 ************************************ 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:31:41.835 * Looking for test storage... 
00:31:41.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.835 11:40:06 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.835 11:40:06 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:41.836 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:31:42.095 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:31:42.095 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:31:42.095 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:31:42.096 11:40:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:31:42.096 Error setting digest 00:31:42.096 00F2231C547F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:31:42.096 00F2231C547F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:31:42.096 11:40:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:52.081 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:52.082 
11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:52.082 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:52.082 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:52.082 Found net devices under 0000:af:00.0: cvl_0_0 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:52.082 Found net devices under 0000:af:00.1: cvl_0_1 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:52.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:52.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:31:52.082 00:31:52.082 --- 10.0.0.2 ping statistics --- 00:31:52.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.082 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:52.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:52.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:31:52.082 00:31:52.082 --- 10.0.0.1 ping statistics --- 00:31:52.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.082 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4035056 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4035056 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 4035056 ']' 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:52.082 11:40:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:52.082 [2024-06-10 11:40:15.901497] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:31:52.082 [2024-06-10 11:40:15.901559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:52.082 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.082 [2024-06-10 11:40:16.018484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.082 [2024-06-10 11:40:16.101082] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:52.082 [2024-06-10 11:40:16.101130] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:52.082 [2024-06-10 11:40:16.101144] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:52.082 [2024-06-10 11:40:16.101156] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:52.082 [2024-06-10 11:40:16.101166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:52.082 [2024-06-10 11:40:16.101195] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:52.082 11:40:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:52.082 11:40:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:31:52.082 11:40:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:52.083 11:40:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:52.083 [2024-06-10 11:40:17.041676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:52.083 [2024-06-10 11:40:17.057672] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:52.083 [2024-06-10 11:40:17.057921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.083 [2024-06-10 11:40:17.086908] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:52.083 malloc0 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4035298 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4035298 /var/tmp/bdevperf.sock 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 4035298 ']' 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- 
# local max_retries=100 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:52.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:52.083 11:40:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:31:52.342 [2024-06-10 11:40:17.190566] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:31:52.342 [2024-06-10 11:40:17.190644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035298 ] 00:31:52.342 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.342 [2024-06-10 11:40:17.285498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.342 [2024-06-10 11:40:17.354806] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.276 11:40:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:53.276 11:40:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:31:53.276 11:40:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:31:53.276 [2024-06-10 11:40:18.292803] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:53.276 [2024-06-10 11:40:18.292896] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:53.276 TLSTESTn1 00:31:53.550 11:40:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:53.550 Running I/O for 10 seconds... 
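Note: the TLS attach sequence traced above condenses to the short sketch below. It is illustrative only, assuming $SPDK_ROOT points at the SPDK checkout this job uses; the PSK, RPC socket path, NQNs and bdevperf flags are the same ones already visible in the trace, while the backgrounding with '&' stands in for the harness's own waitforlisten handling.
    # Write the interchange-format PSK with restrictive permissions, as fips.sh does above.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > key.txt && chmod 0600 key.txt
    # Start bdevperf detached on a private RPC socket (queue depth 128, 4 KiB verify workload, 10 s run).
    $SPDK_ROOT/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # (the real run waits for /var/tmp/bdevperf.sock via waitforlisten before issuing RPCs)
    # Attach to the TLS-enabled listener with the PSK, then kick off the queued workload.
    $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$PWD/key.txt"
    $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The per-core latency table that follows is the output of this perform_tests run completing after the 10-second window.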
00:32:03.516 00:32:03.516 Latency(us) 00:32:03.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.516 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:32:03.516 Verification LBA range: start 0x0 length 0x2000 00:32:03.516 TLSTESTn1 : 10.03 3737.90 14.60 0.00 0.00 34175.79 6474.96 50121.93 00:32:03.516 =================================================================================================================== 00:32:03.516 Total : 3737.90 14.60 0.00 0.00 34175.79 6474.96 50121.93 00:32:03.516 0 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:32:03.516 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:03.516 nvmf_trace.0 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4035298 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 4035298 ']' 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 4035298 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4035298 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4035298' 00:32:03.857 killing process with pid 4035298 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 4035298 00:32:03.857 Received shutdown signal, test time was about 10.000000 seconds 00:32:03.857 00:32:03.857 Latency(us) 00:32:03.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.857 =================================================================================================================== 00:32:03.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:03.857 [2024-06-10 11:40:28.725486] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 4035298 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:03.857 rmmod nvme_tcp 00:32:03.857 rmmod nvme_fabrics 00:32:03.857 rmmod nvme_keyring 00:32:03.857 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:04.116 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:32:04.116 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:32:04.116 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4035056 ']' 00:32:04.116 11:40:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4035056 00:32:04.116 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 4035056 ']' 00:32:04.116 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 4035056 00:32:04.116 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:32:04.116 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:04.116 11:40:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4035056 00:32:04.116 11:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:04.116 11:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:04.116 11:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4035056' 00:32:04.116 killing process with pid 4035056 00:32:04.116 11:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 4035056 00:32:04.116 [2024-06-10 11:40:29.022099] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:04.116 11:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 4035056 00:32:04.375 11:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:04.375 11:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:04.375 11:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:04.375 11:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:04.375 11:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:04.375 11:40:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.375 11:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:04.375 11:40:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.279 11:40:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:06.279 11:40:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:32:06.279 00:32:06.279 real 0m24.557s 00:32:06.279 user 0m23.501s 00:32:06.279 sys 0m12.577s 00:32:06.279 11:40:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:06.279 11:40:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:32:06.279 ************************************ 00:32:06.279 END TEST nvmf_fips 
00:32:06.279 ************************************ 00:32:06.279 11:40:31 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:32:06.279 11:40:31 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:32:06.279 11:40:31 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:32:06.279 11:40:31 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:32:06.279 11:40:31 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:32:06.279 11:40:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:14.397 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:14.397 11:40:39 nvmf_tcp -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:14.397 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:14.397 Found net devices under 0000:af:00.0: cvl_0_0 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:14.397 Found net devices under 0000:af:00.1: cvl_0_1 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:32:14.397 11:40:39 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:32:14.397 11:40:39 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:14.397 11:40:39 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:14.397 11:40:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
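Note: the interface discovery traced above (gather_supported_nvmf_pci_devs) reduces to a sysfs walk: the known Intel/Mellanox device IDs are collected per PCI function and each function is mapped to the netdev the kernel created for it. A minimal stand-alone sketch of that mapping for the two E810 ports (0x8086 - 0x159b) found in this run; the harness additionally filters on driver binding and link state, which is omitted here:
    # For each candidate PCI function, list the netdevs exposed under its sysfs node.
    for pci in 0000:af:00.0 0000:af:00.1; do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done
On this node the lookup yields cvl_0_0 and cvl_0_1, which populate TCP_INTERFACE_LIST for the nvmf_perf_adq test that starts below.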
00:32:14.397 ************************************ 00:32:14.397 START TEST nvmf_perf_adq 00:32:14.397 ************************************ 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:32:14.397 * Looking for test storage... 00:32:14.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.397 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:32:14.398 11:40:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:22.518 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:22.518 Found 0000:af:00.1 (0x8086 - 0x159b) 
00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:22.518 Found net devices under 0000:af:00.0: cvl_0_0 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:22.518 Found net devices under 0000:af:00.1: cvl_0_1 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:32:22.518 11:40:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:32:23.896 11:40:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:32:25.802 11:40:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:32:31.090 11:40:55 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:32:31.090 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:31.090 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.090 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:31.090 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:31.090 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:31.090 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.090 11:40:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:31.090 11:40:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.090 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:31.091 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:31.091 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:31.091 Found net devices under 0000:af:00.0: cvl_0_0 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:31.091 Found net devices under 0000:af:00.1: cvl_0_1 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:31.091 11:40:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.091 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.091 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.091 11:40:56 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:31.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:32:31.091 00:32:31.091 --- 10.0.0.2 ping statistics --- 00:32:31.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.091 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:32:31.091 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:31.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:32:31.091 00:32:31.092 --- 10.0.0.1 ping statistics --- 00:32:31.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.092 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4047026 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4047026 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 4047026 ']' 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:31.092 11:40:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:31.092 [2024-06-10 11:40:56.156658] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
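Note: the nvmf_tcp_init sequence traced above builds the usual single-host, two-endpoint topology: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and an iptables rule admits the NVMe/TCP port. A condensed sketch of those steps, using the same names and addresses as the trace with error handling omitted:
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
Both directions are then verified with a single ping, as the statistics above show, and nvmf_tgt is launched inside the namespace via 'ip netns exec cvl_0_0_ns_spdk'. The "Starting SPDK" banner just above and the DPDK EAL parameter line that follows are that namespaced nvmf_tgt instance coming up on four reactor cores (-m 0xF).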
00:32:31.092 [2024-06-10 11:40:56.156717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.350 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.350 [2024-06-10 11:40:56.283423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:31.350 [2024-06-10 11:40:56.370817] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.350 [2024-06-10 11:40:56.370863] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.350 [2024-06-10 11:40:56.370876] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.350 [2024-06-10 11:40:56.370888] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.350 [2024-06-10 11:40:56.370898] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.350 [2024-06-10 11:40:56.370954] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.350 [2024-06-10 11:40:56.371047] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:32:31.350 [2024-06-10 11:40:56.371160] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.350 [2024-06-10 11:40:56.371160] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:32.282 [2024-06-10 11:40:57.267791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:32.282 Malloc1 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:32.282 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:32.283 [2024-06-10 11:40:57.315454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4047290 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:32:32.283 11:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:32.283 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.811 11:40:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:32:34.811 11:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:34.811 11:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:34.811 11:40:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:34.811 11:40:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:32:34.811 "tick_rate": 2500000000, 
00:32:34.811 "poll_groups": [ 00:32:34.811 { 00:32:34.811 "name": "nvmf_tgt_poll_group_000", 00:32:34.811 "admin_qpairs": 1, 00:32:34.811 "io_qpairs": 1, 00:32:34.811 "current_admin_qpairs": 1, 00:32:34.811 "current_io_qpairs": 1, 00:32:34.811 "pending_bdev_io": 0, 00:32:34.811 "completed_nvme_io": 16445, 00:32:34.811 "transports": [ 00:32:34.811 { 00:32:34.811 "trtype": "TCP" 00:32:34.811 } 00:32:34.811 ] 00:32:34.811 }, 00:32:34.811 { 00:32:34.812 "name": "nvmf_tgt_poll_group_001", 00:32:34.812 "admin_qpairs": 0, 00:32:34.812 "io_qpairs": 1, 00:32:34.812 "current_admin_qpairs": 0, 00:32:34.812 "current_io_qpairs": 1, 00:32:34.812 "pending_bdev_io": 0, 00:32:34.812 "completed_nvme_io": 19804, 00:32:34.812 "transports": [ 00:32:34.812 { 00:32:34.812 "trtype": "TCP" 00:32:34.812 } 00:32:34.812 ] 00:32:34.812 }, 00:32:34.812 { 00:32:34.812 "name": "nvmf_tgt_poll_group_002", 00:32:34.812 "admin_qpairs": 0, 00:32:34.812 "io_qpairs": 1, 00:32:34.812 "current_admin_qpairs": 0, 00:32:34.812 "current_io_qpairs": 1, 00:32:34.812 "pending_bdev_io": 0, 00:32:34.812 "completed_nvme_io": 16655, 00:32:34.812 "transports": [ 00:32:34.812 { 00:32:34.812 "trtype": "TCP" 00:32:34.812 } 00:32:34.812 ] 00:32:34.812 }, 00:32:34.812 { 00:32:34.812 "name": "nvmf_tgt_poll_group_003", 00:32:34.812 "admin_qpairs": 0, 00:32:34.812 "io_qpairs": 1, 00:32:34.812 "current_admin_qpairs": 0, 00:32:34.812 "current_io_qpairs": 1, 00:32:34.812 "pending_bdev_io": 0, 00:32:34.812 "completed_nvme_io": 16091, 00:32:34.812 "transports": [ 00:32:34.812 { 00:32:34.812 "trtype": "TCP" 00:32:34.812 } 00:32:34.812 ] 00:32:34.812 } 00:32:34.812 ] 00:32:34.812 }' 00:32:34.812 11:40:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:32:34.812 11:40:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:32:34.812 11:40:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:32:34.812 11:40:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:32:34.812 11:40:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4047290 00:32:42.920 Initializing NVMe Controllers 00:32:42.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:32:42.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:32:42.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:32:42.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:32:42.920 Initialization complete. Launching workers. 
00:32:42.920 ======================================================== 00:32:42.920 Latency(us) 00:32:42.920 Device Information : IOPS MiB/s Average min max 00:32:42.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8595.90 33.58 7447.72 2808.30 12152.83 00:32:42.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10497.20 41.00 6097.11 1794.63 10702.20 00:32:42.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8844.20 34.55 7238.02 2324.40 11876.73 00:32:42.920 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8712.30 34.03 7347.31 2946.24 11923.36 00:32:42.920 ======================================================== 00:32:42.920 Total : 36649.60 143.16 6986.40 1794.63 12152.83 00:32:42.920 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:42.920 rmmod nvme_tcp 00:32:42.920 rmmod nvme_fabrics 00:32:42.920 rmmod nvme_keyring 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4047026 ']' 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4047026 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 4047026 ']' 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 4047026 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4047026 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4047026' 00:32:42.920 killing process with pid 4047026 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 4047026 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 4047026 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:42.920 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:42.921 11:41:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:42.921 11:41:07 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.921 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.921 11:41:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.825 11:41:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:44.825 11:41:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:32:44.825 11:41:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:32:46.201 11:41:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:32:48.737 11:41:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:54.014 11:41:18 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:54.014 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:54.014 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:54.015 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
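The pci_devs loop just entered resolves each detected Intel E810 port (device ID 0x159b) to its kernel net device through /sys/bus/pci/devices/<pci>/net, which is where the "Found net devices under ..." lines that follow come from. A rough standalone equivalent of that lookup, using lspci instead of the script's own pci_bus_cache helper (the 8086:159b ID and the sysfs path are taken from this run; the rest is illustrative):

#!/usr/bin/env bash
# Enumerate Intel E810 ports by PCI ID and print the net device bound to each one.
for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue          # skip functions with no driver/netdev bound
        printf 'Found net devices under %s: %s\n' "$pci" "${dev##*/}"
    done
done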
00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:54.015 Found net devices under 0000:af:00.0: cvl_0_0 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:54.015 Found net devices under 0000:af:00.1: cvl_0_1 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:54.015 
11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:54.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:54.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:32:54.015 00:32:54.015 --- 10.0.0.2 ping statistics --- 00:32:54.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:54.015 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:54.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:54.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:32:54.015 00:32:54.015 --- 10.0.0.1 ping statistics --- 00:32:54.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:54.015 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:32:54.015 net.core.busy_poll = 1 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:32:54.015 net.core.busy_read = 1 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:32:54.015 11:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:32:54.015 11:41:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:32:54.015 11:41:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:32:54.015 11:41:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:32:54.274 11:41:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:54.274 11:41:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:54.274 11:41:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:54.274 11:41:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:54.274 11:41:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4051149 00:32:54.275 11:41:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4051149 00:32:54.275 11:41:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:54.275 11:41:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 4051149 ']' 00:32:54.275 11:41:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.275 11:41:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:54.275 11:41:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.275 11:41:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:54.275 11:41:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:54.275 [2024-06-10 11:41:19.204478] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:32:54.275 [2024-06-10 11:41:19.204544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.275 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.275 [2024-06-10 11:41:19.331640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:54.533 [2024-06-10 11:41:19.418540] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:54.533 [2024-06-10 11:41:19.418587] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:54.533 [2024-06-10 11:41:19.418601] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:54.533 [2024-06-10 11:41:19.418613] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:54.533 [2024-06-10 11:41:19.418623] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
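Taken together, the adq_configure_driver steps traced above are a short fixed sequence: enable hardware TC offload on the port, disable the channel packet-inspect optimization, turn on kernel busy polling, split the queues into two traffic classes with an mqprio root qdisc, and pin NVMe/TCP traffic for the 4420 listener into the second class with a hardware flower filter (the run then also pins XPS receive queues with the set_xps_rxqs helper). Condensed into one place, with the same port cvl_0_0, namespace cvl_0_0_ns_spdk, and 10.0.0.2:4420 listener as this run:

NS="ip netns exec cvl_0_0_ns_spdk"
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (the ADQ set), offloaded in channel mode.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Steer TCP traffic destined for the 10.0.0.2:4420 listener into TC1 in hardware only (skip_sw).
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1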
00:32:54.533 [2024-06-10 11:41:19.418727] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.533 [2024-06-10 11:41:19.418827] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:32:54.533 [2024-06-10 11:41:19.418938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:32:54.533 [2024-06-10 11:41:19.418938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:55.099 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:55.358 [2024-06-10 11:41:20.324474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:55.358 Malloc1 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.358 11:41:20 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.358 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:55.359 [2024-06-10 11:41:20.372046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4051440 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:32:55.359 11:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:55.359 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:32:57.889 "tick_rate": 2500000000, 00:32:57.889 "poll_groups": [ 00:32:57.889 { 00:32:57.889 "name": "nvmf_tgt_poll_group_000", 00:32:57.889 "admin_qpairs": 1, 00:32:57.889 "io_qpairs": 0, 00:32:57.889 "current_admin_qpairs": 1, 00:32:57.889 "current_io_qpairs": 0, 00:32:57.889 "pending_bdev_io": 0, 00:32:57.889 "completed_nvme_io": 0, 00:32:57.889 "transports": [ 00:32:57.889 { 00:32:57.889 "trtype": "TCP" 00:32:57.889 } 00:32:57.889 ] 00:32:57.889 }, 00:32:57.889 { 00:32:57.889 "name": "nvmf_tgt_poll_group_001", 00:32:57.889 "admin_qpairs": 0, 00:32:57.889 "io_qpairs": 4, 00:32:57.889 "current_admin_qpairs": 0, 00:32:57.889 "current_io_qpairs": 4, 00:32:57.889 "pending_bdev_io": 0, 00:32:57.889 "completed_nvme_io": 46391, 00:32:57.889 "transports": [ 00:32:57.889 { 00:32:57.889 "trtype": "TCP" 00:32:57.889 } 00:32:57.889 ] 00:32:57.889 }, 00:32:57.889 { 00:32:57.889 "name": "nvmf_tgt_poll_group_002", 00:32:57.889 "admin_qpairs": 0, 00:32:57.889 "io_qpairs": 0, 00:32:57.889 "current_admin_qpairs": 0, 00:32:57.889 "current_io_qpairs": 0, 00:32:57.889 "pending_bdev_io": 0, 00:32:57.889 "completed_nvme_io": 0, 00:32:57.889 
"transports": [ 00:32:57.889 { 00:32:57.889 "trtype": "TCP" 00:32:57.889 } 00:32:57.889 ] 00:32:57.889 }, 00:32:57.889 { 00:32:57.889 "name": "nvmf_tgt_poll_group_003", 00:32:57.889 "admin_qpairs": 0, 00:32:57.889 "io_qpairs": 0, 00:32:57.889 "current_admin_qpairs": 0, 00:32:57.889 "current_io_qpairs": 0, 00:32:57.889 "pending_bdev_io": 0, 00:32:57.889 "completed_nvme_io": 0, 00:32:57.889 "transports": [ 00:32:57.889 { 00:32:57.889 "trtype": "TCP" 00:32:57.889 } 00:32:57.889 ] 00:32:57.889 } 00:32:57.889 ] 00:32:57.889 }' 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:32:57.889 11:41:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4051440 00:33:06.031 Initializing NVMe Controllers 00:33:06.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:06.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:33:06.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:33:06.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:33:06.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:33:06.031 Initialization complete. Launching workers. 00:33:06.031 ======================================================== 00:33:06.031 Latency(us) 00:33:06.031 Device Information : IOPS MiB/s Average min max 00:33:06.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6586.50 25.73 9749.27 1781.90 55936.44 00:33:06.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5806.60 22.68 11025.77 1536.86 54955.90 00:33:06.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6357.50 24.83 10076.93 1297.73 56547.22 00:33:06.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5982.50 23.37 10734.12 1463.20 56461.67 00:33:06.031 ======================================================== 00:33:06.031 Total : 24733.09 96.61 10371.40 1297.73 56547.22 00:33:06.031 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:06.031 rmmod nvme_tcp 00:33:06.031 rmmod nvme_fabrics 00:33:06.031 rmmod nvme_keyring 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4051149 ']' 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 
4051149 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 4051149 ']' 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 4051149 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4051149 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4051149' 00:33:06.031 killing process with pid 4051149 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 4051149 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 4051149 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:06.031 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:06.032 11:41:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.032 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:06.032 11:41:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.939 11:41:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:07.940 11:41:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:07.940 00:33:07.940 real 0m53.847s 00:33:07.940 user 2m46.417s 00:33:07.940 sys 0m16.594s 00:33:07.940 11:41:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:07.940 11:41:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:33:07.940 ************************************ 00:33:07.940 END TEST nvmf_perf_adq 00:33:07.940 ************************************ 00:33:08.199 11:41:33 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:33:08.199 11:41:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:08.199 11:41:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:08.199 11:41:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:08.199 ************************************ 00:33:08.199 START TEST nvmf_shutdown 00:33:08.199 ************************************ 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:33:08.199 * Looking for test storage... 
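The pass check at the end of the ADQ run that just finished is worth spelling out: after the perf workload, nvmf_get_stats is queried and the poll groups left with zero I/O qpairs are counted; with ADQ steering and placement-id socket grouping working, all four connections collapse onto one poll group, so three of the four groups stay idle and the test only fails when fewer than two are idle. A minimal sketch of the same check against a running target, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket:

# One output line is printed per poll group that finished with no I/O qpairs attached.
idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
if [ "$idle" -lt 2 ]; then
    echo "ADQ steering check failed: only $idle idle poll groups" >&2
    exit 1
fi
echo "ADQ steering looks good: $idle idle poll groups"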
00:33:08.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:08.199 ************************************ 00:33:08.199 START TEST nvmf_shutdown_tc1 00:33:08.199 ************************************ 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:33:08.199 11:41:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:33:08.199 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:08.200 11:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:16.324 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:16.325 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:16.325 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:16.325 11:41:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:16.325 Found net devices under 0000:af:00.0: cvl_0_0 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:16.325 Found net devices under 0000:af:00.1: cvl_0_1 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.325 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:16.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:33:16.585 00:33:16.585 --- 10.0.0.2 ping statistics --- 00:33:16.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.585 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:16.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:33:16.585 00:33:16.585 --- 10.0.0.1 ping statistics --- 00:33:16.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.585 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4057585 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4057585 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 4057585 ']' 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:16.585 11:41:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:16.844 [2024-06-10 11:41:41.731832] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
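The nvmftestinit bring-up just repeated for the shutdown suite uses the same physical-NIC topology as the rest of this job: the first E810 port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-tested before the target starts. Reduced to its essentials, with the device and address values from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before launching nvmf_tgt inside the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1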
00:33:16.844 [2024-06-10 11:41:41.731900] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.844 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.844 [2024-06-10 11:41:41.851732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:16.844 [2024-06-10 11:41:41.934050] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.844 [2024-06-10 11:41:41.934099] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.844 [2024-06-10 11:41:41.934113] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.844 [2024-06-10 11:41:41.934125] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.844 [2024-06-10 11:41:41.934135] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.844 [2024-06-10 11:41:41.934246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.844 [2024-06-10 11:41:41.934340] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.844 [2024-06-10 11:41:41.934460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.844 [2024-06-10 11:41:41.934461] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:17.779 [2024-06-10 11:41:42.699831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:17.779 11:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:17.779 Malloc1 00:33:17.779 [2024-06-10 11:41:42.815836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:17.779 Malloc2 00:33:17.779 Malloc3 00:33:18.038 Malloc4 00:33:18.038 Malloc5 00:33:18.038 Malloc6 00:33:18.038 Malloc7 00:33:18.038 Malloc8 00:33:18.038 Malloc9 00:33:18.296 Malloc10 00:33:18.296 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4057909 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4057909 /var/tmp/bdevperf.sock 00:33:18.297 11:41:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 4057909 ']' 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:18.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.297 { 00:33:18.297 "params": { 00:33:18.297 "name": "Nvme$subsystem", 00:33:18.297 "trtype": "$TEST_TRANSPORT", 00:33:18.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.297 "adrfam": "ipv4", 00:33:18.297 "trsvcid": "$NVMF_PORT", 00:33:18.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.297 "hdgst": ${hdgst:-false}, 00:33:18.297 "ddgst": ${ddgst:-false} 00:33:18.297 }, 00:33:18.297 "method": "bdev_nvme_attach_controller" 00:33:18.297 } 00:33:18.297 EOF 00:33:18.297 )") 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.297 { 00:33:18.297 "params": { 00:33:18.297 "name": "Nvme$subsystem", 00:33:18.297 "trtype": "$TEST_TRANSPORT", 00:33:18.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.297 "adrfam": "ipv4", 00:33:18.297 "trsvcid": "$NVMF_PORT", 00:33:18.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.297 "hdgst": ${hdgst:-false}, 00:33:18.297 "ddgst": ${ddgst:-false} 00:33:18.297 }, 00:33:18.297 "method": "bdev_nvme_attach_controller" 00:33:18.297 } 00:33:18.297 EOF 00:33:18.297 )") 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.297 { 00:33:18.297 "params": { 00:33:18.297 "name": "Nvme$subsystem", 00:33:18.297 "trtype": 
"$TEST_TRANSPORT", 00:33:18.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.297 "adrfam": "ipv4", 00:33:18.297 "trsvcid": "$NVMF_PORT", 00:33:18.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.297 "hdgst": ${hdgst:-false}, 00:33:18.297 "ddgst": ${ddgst:-false} 00:33:18.297 }, 00:33:18.297 "method": "bdev_nvme_attach_controller" 00:33:18.297 } 00:33:18.297 EOF 00:33:18.297 )") 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.297 { 00:33:18.297 "params": { 00:33:18.297 "name": "Nvme$subsystem", 00:33:18.297 "trtype": "$TEST_TRANSPORT", 00:33:18.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.297 "adrfam": "ipv4", 00:33:18.297 "trsvcid": "$NVMF_PORT", 00:33:18.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.297 "hdgst": ${hdgst:-false}, 00:33:18.297 "ddgst": ${ddgst:-false} 00:33:18.297 }, 00:33:18.297 "method": "bdev_nvme_attach_controller" 00:33:18.297 } 00:33:18.297 EOF 00:33:18.297 )") 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.297 { 00:33:18.297 "params": { 00:33:18.297 "name": "Nvme$subsystem", 00:33:18.297 "trtype": "$TEST_TRANSPORT", 00:33:18.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.297 "adrfam": "ipv4", 00:33:18.297 "trsvcid": "$NVMF_PORT", 00:33:18.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.297 "hdgst": ${hdgst:-false}, 00:33:18.297 "ddgst": ${ddgst:-false} 00:33:18.297 }, 00:33:18.297 "method": "bdev_nvme_attach_controller" 00:33:18.297 } 00:33:18.297 EOF 00:33:18.297 )") 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.297 { 00:33:18.297 "params": { 00:33:18.297 "name": "Nvme$subsystem", 00:33:18.297 "trtype": "$TEST_TRANSPORT", 00:33:18.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.297 "adrfam": "ipv4", 00:33:18.297 "trsvcid": "$NVMF_PORT", 00:33:18.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.297 "hdgst": ${hdgst:-false}, 00:33:18.297 "ddgst": ${ddgst:-false} 00:33:18.297 }, 00:33:18.297 "method": "bdev_nvme_attach_controller" 00:33:18.297 } 00:33:18.297 EOF 00:33:18.297 )") 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.297 [2024-06-10 11:41:43.303950] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:33:18.297 [2024-06-10 11:41:43.304015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.297 { 00:33:18.297 "params": { 00:33:18.297 "name": "Nvme$subsystem", 00:33:18.297 "trtype": "$TEST_TRANSPORT", 00:33:18.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.297 "adrfam": "ipv4", 00:33:18.297 "trsvcid": "$NVMF_PORT", 00:33:18.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.297 "hdgst": ${hdgst:-false}, 00:33:18.297 "ddgst": ${ddgst:-false} 00:33:18.297 }, 00:33:18.297 "method": "bdev_nvme_attach_controller" 00:33:18.297 } 00:33:18.297 EOF 00:33:18.297 )") 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.297 { 00:33:18.297 "params": { 00:33:18.297 "name": "Nvme$subsystem", 00:33:18.297 "trtype": "$TEST_TRANSPORT", 00:33:18.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.297 "adrfam": "ipv4", 00:33:18.297 "trsvcid": "$NVMF_PORT", 00:33:18.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.297 "hdgst": ${hdgst:-false}, 00:33:18.297 "ddgst": ${ddgst:-false} 00:33:18.297 }, 00:33:18.297 "method": "bdev_nvme_attach_controller" 00:33:18.297 } 00:33:18.297 EOF 00:33:18.297 )") 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.297 { 00:33:18.297 "params": { 00:33:18.297 "name": "Nvme$subsystem", 00:33:18.297 "trtype": "$TEST_TRANSPORT", 00:33:18.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.297 "adrfam": "ipv4", 00:33:18.297 "trsvcid": "$NVMF_PORT", 00:33:18.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.297 "hdgst": ${hdgst:-false}, 00:33:18.297 "ddgst": ${ddgst:-false} 00:33:18.297 }, 00:33:18.297 "method": "bdev_nvme_attach_controller" 00:33:18.297 } 00:33:18.297 EOF 00:33:18.297 )") 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.297 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:18.298 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:18.298 { 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme$subsystem", 00:33:18.298 "trtype": "$TEST_TRANSPORT", 00:33:18.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "$NVMF_PORT", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.298 "hdgst": ${hdgst:-false}, 00:33:18.298 "ddgst": 
${ddgst:-false} 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 } 00:33:18.298 EOF 00:33:18.298 )") 00:33:18.298 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:18.298 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:33:18.298 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:33:18.298 11:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme1", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 },{ 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme2", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 },{ 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme3", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 },{ 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme4", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 },{ 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme5", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 },{ 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme6", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 },{ 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme7", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 },{ 
00:33:18.298 "params": { 00:33:18.298 "name": "Nvme8", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 },{ 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme9", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 },{ 00:33:18.298 "params": { 00:33:18.298 "name": "Nvme10", 00:33:18.298 "trtype": "tcp", 00:33:18.298 "traddr": "10.0.0.2", 00:33:18.298 "adrfam": "ipv4", 00:33:18.298 "trsvcid": "4420", 00:33:18.298 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:33:18.298 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:33:18.298 "hdgst": false, 00:33:18.298 "ddgst": false 00:33:18.298 }, 00:33:18.298 "method": "bdev_nvme_attach_controller" 00:33:18.298 }' 00:33:18.298 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.557 [2024-06-10 11:41:43.427001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.557 [2024-06-10 11:41:43.508971] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.931 11:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:19.931 11:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:33:19.931 11:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:19.931 11:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:19.931 11:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:19.931 11:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:19.931 11:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4057909 00:33:19.931 11:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:33:19.931 11:41:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:33:20.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4057909 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4057585 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:33:20.865 11:41:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:20.865 { 00:33:20.865 "params": { 00:33:20.865 "name": "Nvme$subsystem", 00:33:20.865 "trtype": "$TEST_TRANSPORT", 00:33:20.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.865 "adrfam": "ipv4", 00:33:20.865 "trsvcid": "$NVMF_PORT", 00:33:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.865 "hdgst": ${hdgst:-false}, 00:33:20.865 "ddgst": ${ddgst:-false} 00:33:20.865 }, 00:33:20.865 "method": "bdev_nvme_attach_controller" 00:33:20.865 } 00:33:20.865 EOF 00:33:20.865 )") 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:20.865 { 00:33:20.865 "params": { 00:33:20.865 "name": "Nvme$subsystem", 00:33:20.865 "trtype": "$TEST_TRANSPORT", 00:33:20.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.865 "adrfam": "ipv4", 00:33:20.865 "trsvcid": "$NVMF_PORT", 00:33:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.865 "hdgst": ${hdgst:-false}, 00:33:20.865 "ddgst": ${ddgst:-false} 00:33:20.865 }, 00:33:20.865 "method": "bdev_nvme_attach_controller" 00:33:20.865 } 00:33:20.865 EOF 00:33:20.865 )") 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:20.865 { 00:33:20.865 "params": { 00:33:20.865 "name": "Nvme$subsystem", 00:33:20.865 "trtype": "$TEST_TRANSPORT", 00:33:20.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.865 "adrfam": "ipv4", 00:33:20.865 "trsvcid": "$NVMF_PORT", 00:33:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.865 "hdgst": ${hdgst:-false}, 00:33:20.865 "ddgst": ${ddgst:-false} 00:33:20.865 }, 00:33:20.865 "method": "bdev_nvme_attach_controller" 00:33:20.865 } 00:33:20.865 EOF 00:33:20.865 )") 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:20.865 { 00:33:20.865 "params": { 00:33:20.865 "name": "Nvme$subsystem", 00:33:20.865 "trtype": "$TEST_TRANSPORT", 00:33:20.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.865 "adrfam": "ipv4", 00:33:20.865 "trsvcid": "$NVMF_PORT", 00:33:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.865 "hdgst": ${hdgst:-false}, 00:33:20.865 "ddgst": ${ddgst:-false} 00:33:20.865 }, 00:33:20.865 "method": "bdev_nvme_attach_controller" 00:33:20.865 } 00:33:20.865 EOF 00:33:20.865 )") 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:20.865 11:41:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:20.865 { 00:33:20.865 "params": { 00:33:20.865 "name": "Nvme$subsystem", 00:33:20.865 "trtype": "$TEST_TRANSPORT", 00:33:20.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.865 "adrfam": "ipv4", 00:33:20.865 "trsvcid": "$NVMF_PORT", 00:33:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.865 "hdgst": ${hdgst:-false}, 00:33:20.865 "ddgst": ${ddgst:-false} 00:33:20.865 }, 00:33:20.865 "method": "bdev_nvme_attach_controller" 00:33:20.865 } 00:33:20.865 EOF 00:33:20.865 )") 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:20.865 { 00:33:20.865 "params": { 00:33:20.865 "name": "Nvme$subsystem", 00:33:20.865 "trtype": "$TEST_TRANSPORT", 00:33:20.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.865 "adrfam": "ipv4", 00:33:20.865 "trsvcid": "$NVMF_PORT", 00:33:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.865 "hdgst": ${hdgst:-false}, 00:33:20.865 "ddgst": ${ddgst:-false} 00:33:20.865 }, 00:33:20.865 "method": "bdev_nvme_attach_controller" 00:33:20.865 } 00:33:20.865 EOF 00:33:20.865 )") 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:20.865 [2024-06-10 11:41:45.954786] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:33:20.865 [2024-06-10 11:41:45.954851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4058438 ] 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:20.865 { 00:33:20.865 "params": { 00:33:20.865 "name": "Nvme$subsystem", 00:33:20.865 "trtype": "$TEST_TRANSPORT", 00:33:20.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.865 "adrfam": "ipv4", 00:33:20.865 "trsvcid": "$NVMF_PORT", 00:33:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.865 "hdgst": ${hdgst:-false}, 00:33:20.865 "ddgst": ${ddgst:-false} 00:33:20.865 }, 00:33:20.865 "method": "bdev_nvme_attach_controller" 00:33:20.865 } 00:33:20.865 EOF 00:33:20.865 )") 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:20.865 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:20.865 { 00:33:20.865 "params": { 00:33:20.865 "name": "Nvme$subsystem", 00:33:20.865 "trtype": "$TEST_TRANSPORT", 00:33:20.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.865 "adrfam": "ipv4", 00:33:20.865 "trsvcid": "$NVMF_PORT", 00:33:20.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:20.865 "hdgst": ${hdgst:-false}, 00:33:20.865 "ddgst": ${ddgst:-false} 00:33:20.865 }, 00:33:20.865 "method": "bdev_nvme_attach_controller" 00:33:20.865 } 00:33:20.865 EOF 00:33:20.865 )") 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:21.123 { 00:33:21.123 "params": { 00:33:21.123 "name": "Nvme$subsystem", 00:33:21.123 "trtype": "$TEST_TRANSPORT", 00:33:21.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.123 "adrfam": "ipv4", 00:33:21.123 "trsvcid": "$NVMF_PORT", 00:33:21.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.123 "hdgst": ${hdgst:-false}, 00:33:21.123 "ddgst": ${ddgst:-false} 00:33:21.123 }, 00:33:21.123 "method": "bdev_nvme_attach_controller" 00:33:21.123 } 00:33:21.123 EOF 00:33:21.123 )") 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:21.123 { 00:33:21.123 "params": { 00:33:21.123 "name": "Nvme$subsystem", 00:33:21.123 "trtype": "$TEST_TRANSPORT", 00:33:21.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.123 "adrfam": "ipv4", 00:33:21.123 "trsvcid": "$NVMF_PORT", 00:33:21.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.123 "hdgst": ${hdgst:-false}, 
00:33:21.123 "ddgst": ${ddgst:-false} 00:33:21.123 }, 00:33:21.123 "method": "bdev_nvme_attach_controller" 00:33:21.123 } 00:33:21.123 EOF 00:33:21.123 )") 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:33:21.123 11:41:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:21.123 "params": { 00:33:21.123 "name": "Nvme1", 00:33:21.123 "trtype": "tcp", 00:33:21.123 "traddr": "10.0.0.2", 00:33:21.123 "adrfam": "ipv4", 00:33:21.123 "trsvcid": "4420", 00:33:21.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:21.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:21.123 "hdgst": false, 00:33:21.123 "ddgst": false 00:33:21.123 }, 00:33:21.123 "method": "bdev_nvme_attach_controller" 00:33:21.123 },{ 00:33:21.123 "params": { 00:33:21.123 "name": "Nvme2", 00:33:21.123 "trtype": "tcp", 00:33:21.123 "traddr": "10.0.0.2", 00:33:21.123 "adrfam": "ipv4", 00:33:21.123 "trsvcid": "4420", 00:33:21.123 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:21.123 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:21.123 "hdgst": false, 00:33:21.123 "ddgst": false 00:33:21.123 }, 00:33:21.123 "method": "bdev_nvme_attach_controller" 00:33:21.123 },{ 00:33:21.123 "params": { 00:33:21.123 "name": "Nvme3", 00:33:21.123 "trtype": "tcp", 00:33:21.123 "traddr": "10.0.0.2", 00:33:21.123 "adrfam": "ipv4", 00:33:21.123 "trsvcid": "4420", 00:33:21.123 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:33:21.123 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:33:21.123 "hdgst": false, 00:33:21.123 "ddgst": false 00:33:21.123 }, 00:33:21.123 "method": "bdev_nvme_attach_controller" 00:33:21.123 },{ 00:33:21.123 "params": { 00:33:21.123 "name": "Nvme4", 00:33:21.123 "trtype": "tcp", 00:33:21.123 "traddr": "10.0.0.2", 00:33:21.123 "adrfam": "ipv4", 00:33:21.123 "trsvcid": "4420", 00:33:21.123 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:33:21.123 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:33:21.123 "hdgst": false, 00:33:21.123 "ddgst": false 00:33:21.123 }, 00:33:21.123 "method": "bdev_nvme_attach_controller" 00:33:21.123 },{ 00:33:21.123 "params": { 00:33:21.123 "name": "Nvme5", 00:33:21.123 "trtype": "tcp", 00:33:21.123 "traddr": "10.0.0.2", 00:33:21.123 "adrfam": "ipv4", 00:33:21.123 "trsvcid": "4420", 00:33:21.123 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:33:21.123 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:33:21.123 "hdgst": false, 00:33:21.123 "ddgst": false 00:33:21.123 }, 00:33:21.124 "method": "bdev_nvme_attach_controller" 00:33:21.124 },{ 00:33:21.124 "params": { 00:33:21.124 "name": "Nvme6", 00:33:21.124 "trtype": "tcp", 00:33:21.124 "traddr": "10.0.0.2", 00:33:21.124 "adrfam": "ipv4", 00:33:21.124 "trsvcid": "4420", 00:33:21.124 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:33:21.124 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:33:21.124 "hdgst": false, 00:33:21.124 "ddgst": false 00:33:21.124 }, 00:33:21.124 "method": "bdev_nvme_attach_controller" 00:33:21.124 },{ 00:33:21.124 "params": { 00:33:21.124 "name": "Nvme7", 00:33:21.124 "trtype": "tcp", 00:33:21.124 "traddr": "10.0.0.2", 00:33:21.124 "adrfam": "ipv4", 00:33:21.124 "trsvcid": "4420", 00:33:21.124 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:33:21.124 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:33:21.124 "hdgst": false, 00:33:21.124 "ddgst": false 00:33:21.124 }, 00:33:21.124 "method": "bdev_nvme_attach_controller" 
00:33:21.124 },{ 00:33:21.124 "params": { 00:33:21.124 "name": "Nvme8", 00:33:21.124 "trtype": "tcp", 00:33:21.124 "traddr": "10.0.0.2", 00:33:21.124 "adrfam": "ipv4", 00:33:21.124 "trsvcid": "4420", 00:33:21.124 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:33:21.124 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:33:21.124 "hdgst": false, 00:33:21.124 "ddgst": false 00:33:21.124 }, 00:33:21.124 "method": "bdev_nvme_attach_controller" 00:33:21.124 },{ 00:33:21.124 "params": { 00:33:21.124 "name": "Nvme9", 00:33:21.124 "trtype": "tcp", 00:33:21.124 "traddr": "10.0.0.2", 00:33:21.124 "adrfam": "ipv4", 00:33:21.124 "trsvcid": "4420", 00:33:21.124 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:33:21.124 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:33:21.124 "hdgst": false, 00:33:21.124 "ddgst": false 00:33:21.124 }, 00:33:21.124 "method": "bdev_nvme_attach_controller" 00:33:21.124 },{ 00:33:21.124 "params": { 00:33:21.124 "name": "Nvme10", 00:33:21.124 "trtype": "tcp", 00:33:21.124 "traddr": "10.0.0.2", 00:33:21.124 "adrfam": "ipv4", 00:33:21.124 "trsvcid": "4420", 00:33:21.124 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:33:21.124 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:33:21.124 "hdgst": false, 00:33:21.124 "ddgst": false 00:33:21.124 }, 00:33:21.124 "method": "bdev_nvme_attach_controller" 00:33:21.124 }' 00:33:21.124 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.124 [2024-06-10 11:41:46.080386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.124 [2024-06-10 11:41:46.162460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.496 Running I/O for 1 seconds... 00:33:23.870 00:33:23.870 Latency(us) 00:33:23.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.870 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme1n1 : 1.01 189.71 11.86 0.00 0.00 332950.19 45298.48 255013.68 00:33:23.870 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme2n1 : 1.17 218.43 13.65 0.00 0.00 284517.38 23068.67 275146.34 00:33:23.870 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme3n1 : 1.14 223.90 13.99 0.00 0.00 272306.59 20656.95 276824.06 00:33:23.870 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme4n1 : 1.14 225.17 14.07 0.00 0.00 264436.53 21181.24 273468.62 00:33:23.870 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme5n1 : 1.15 223.27 13.95 0.00 0.00 261865.47 41943.04 228170.14 00:33:23.870 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme6n1 : 1.18 217.35 13.58 0.00 0.00 264825.04 20027.80 281857.23 00:33:23.870 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme7n1 : 1.16 225.90 14.12 0.00 0.00 248968.55 4613.73 246625.08 00:33:23.870 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme8n1 : 1.16 220.39 13.77 0.00 0.00 250910.31 
26319.26 276824.06 00:33:23.870 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme9n1 : 1.17 218.63 13.66 0.00 0.00 247801.24 22020.10 260046.85 00:33:23.870 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:23.870 Verification LBA range: start 0x0 length 0x400 00:33:23.870 Nvme10n1 : 1.20 225.88 14.12 0.00 0.00 235120.59 2306.87 307023.05 00:33:23.870 =================================================================================================================== 00:33:23.870 Total : 2188.62 136.79 0.00 0.00 264467.36 2306.87 307023.05 00:33:23.870 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:33:23.870 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:33:24.129 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:24.129 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:24.129 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:33:24.129 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:24.129 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:33:24.129 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:24.129 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:33:24.129 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:24.129 11:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:24.129 rmmod nvme_tcp 00:33:24.129 rmmod nvme_fabrics 00:33:24.129 rmmod nvme_keyring 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4057585 ']' 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4057585 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 4057585 ']' 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 4057585 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4057585 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 4057585' 00:33:24.129 killing process with pid 4057585 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 4057585 00:33:24.129 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 4057585 00:33:24.697 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:24.697 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:24.697 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:24.697 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:24.697 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:24.697 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.697 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:24.697 11:41:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.604 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:26.604 00:33:26.604 real 0m18.347s 00:33:26.604 user 0m36.485s 00:33:26.604 sys 0m8.215s 00:33:26.604 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:26.604 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:26.604 ************************************ 00:33:26.604 END TEST nvmf_shutdown_tc1 00:33:26.604 ************************************ 00:33:26.604 11:41:51 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:33:26.604 11:41:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:26.604 11:41:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:26.604 11:41:51 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:26.864 ************************************ 00:33:26.864 START TEST nvmf_shutdown_tc2 00:33:26.864 ************************************ 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:26.864 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.865 11:41:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:26.865 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:26.865 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:26.865 11:41:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:26.865 Found net devices under 0000:af:00.0: cvl_0_0 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:26.865 Found net devices under 0000:af:00.1: cvl_0_1 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:26.865 11:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:27.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:27.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:33:27.125 00:33:27.125 --- 10.0.0.2 ping statistics --- 00:33:27.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.125 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:27.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:27.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:33:27.125 00:33:27.125 --- 10.0.0.1 ping statistics --- 00:33:27.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:27.125 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4059517 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # 
waitforlisten 4059517 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 4059517 ']' 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:27.125 11:41:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:27.125 [2024-06-10 11:41:52.152139] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:33:27.125 [2024-06-10 11:41:52.152202] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:27.125 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.383 [2024-06-10 11:41:52.269152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:27.384 [2024-06-10 11:41:52.356420] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:27.384 [2024-06-10 11:41:52.356462] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:27.384 [2024-06-10 11:41:52.356475] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:27.384 [2024-06-10 11:41:52.356488] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:27.384 [2024-06-10 11:41:52.356499] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
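The namespace plumbing exercised above is what gives the target its own view of one NIC port (cvl_0_0, 10.0.0.2) while the initiator keeps the other port (cvl_0_1, 10.0.0.1) in the default namespace, with TCP port 4420 opened for NVMe/TCP. A minimal stand-alone sketch of that setup, using the same interface names and addresses that appear in the log:

  # target side: move one port into a dedicated network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator side: the second port stays in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # open the NVMe/TCP port and check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
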
00:33:27.384 [2024-06-10 11:41:52.356612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:27.384 [2024-06-10 11:41:52.356722] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:33:27.384 [2024-06-10 11:41:52.356829] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.384 [2024-06-10 11:41:52.356830] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.319 [2024-06-10 11:41:53.113911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.319 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.319 Malloc1 00:33:28.319 [2024-06-10 11:41:53.225974] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.319 Malloc2 00:33:28.319 Malloc3 00:33:28.319 Malloc4 00:33:28.319 Malloc5 00:33:28.319 Malloc6 00:33:28.578 Malloc7 00:33:28.578 Malloc8 00:33:28.578 Malloc9 00:33:28.578 Malloc10 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4059819 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4059819 /var/tmp/bdevperf.sock 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 4059819 ']' 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:28.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
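Each pass of the `for i in "${num_subsystems[@]}"` loop above only cats another block of RPC commands into rpcs.txt; the single rpc_cmd call at target/shutdown.sh@35 then replays the whole file against the running nvmf_tgt, which is why the log jumps straight from the loop to the Malloc1-Malloc10 bdevs and the 10.0.0.2:4420 listener. The per-iteration body is not echoed, so the lines below are only an illustrative sketch of what one iteration amounts to (bdev size, block size and serial number are assumptions, not values taken from the log):

  # roughly what the loop queues up for subsystem i=1, written out long-hand
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1                             # backing malloc bdev (size assumed)
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1     # serial number assumed
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1         # attach the bdev as a namespace
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
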
00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.578 { 00:33:28.578 "params": { 00:33:28.578 "name": "Nvme$subsystem", 00:33:28.578 "trtype": "$TEST_TRANSPORT", 00:33:28.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.578 "adrfam": "ipv4", 00:33:28.578 "trsvcid": "$NVMF_PORT", 00:33:28.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.578 "hdgst": ${hdgst:-false}, 00:33:28.578 "ddgst": ${ddgst:-false} 00:33:28.578 }, 00:33:28.578 "method": "bdev_nvme_attach_controller" 00:33:28.578 } 00:33:28.578 EOF 00:33:28.578 )") 00:33:28.578 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.837 { 00:33:28.837 "params": { 00:33:28.837 "name": "Nvme$subsystem", 00:33:28.837 "trtype": "$TEST_TRANSPORT", 00:33:28.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.837 "adrfam": "ipv4", 00:33:28.837 "trsvcid": "$NVMF_PORT", 00:33:28.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.837 "hdgst": ${hdgst:-false}, 00:33:28.837 "ddgst": ${ddgst:-false} 00:33:28.837 }, 00:33:28.837 "method": "bdev_nvme_attach_controller" 00:33:28.837 } 00:33:28.837 EOF 00:33:28.837 )") 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.837 { 00:33:28.837 "params": { 00:33:28.837 "name": "Nvme$subsystem", 00:33:28.837 "trtype": "$TEST_TRANSPORT", 00:33:28.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.837 "adrfam": "ipv4", 00:33:28.837 "trsvcid": "$NVMF_PORT", 00:33:28.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.837 "hdgst": ${hdgst:-false}, 00:33:28.837 "ddgst": ${ddgst:-false} 00:33:28.837 }, 00:33:28.837 "method": "bdev_nvme_attach_controller" 00:33:28.837 } 00:33:28.837 EOF 00:33:28.837 )") 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.837 { 00:33:28.837 "params": { 00:33:28.837 "name": "Nvme$subsystem", 00:33:28.837 "trtype": "$TEST_TRANSPORT", 00:33:28.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.837 "adrfam": "ipv4", 00:33:28.837 "trsvcid": "$NVMF_PORT", 00:33:28.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.837 "hdgst": ${hdgst:-false}, 00:33:28.837 "ddgst": ${ddgst:-false} 00:33:28.837 }, 00:33:28.837 "method": "bdev_nvme_attach_controller" 00:33:28.837 } 00:33:28.837 EOF 00:33:28.837 )") 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.837 { 00:33:28.837 "params": { 00:33:28.837 "name": "Nvme$subsystem", 00:33:28.837 "trtype": "$TEST_TRANSPORT", 00:33:28.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.837 "adrfam": "ipv4", 00:33:28.837 "trsvcid": "$NVMF_PORT", 00:33:28.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.837 "hdgst": ${hdgst:-false}, 00:33:28.837 "ddgst": ${ddgst:-false} 00:33:28.837 }, 00:33:28.837 "method": "bdev_nvme_attach_controller" 00:33:28.837 } 00:33:28.837 EOF 00:33:28.837 )") 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:28.837 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.837 { 00:33:28.837 "params": { 00:33:28.837 "name": "Nvme$subsystem", 00:33:28.837 "trtype": "$TEST_TRANSPORT", 00:33:28.837 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.837 "adrfam": "ipv4", 00:33:28.837 "trsvcid": "$NVMF_PORT", 00:33:28.837 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.837 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.837 "hdgst": ${hdgst:-false}, 00:33:28.837 "ddgst": ${ddgst:-false} 00:33:28.837 }, 00:33:28.837 "method": "bdev_nvme_attach_controller" 00:33:28.837 } 00:33:28.837 EOF 00:33:28.837 )") 00:33:28.837 [2024-06-10 11:41:53.717180] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:33:28.838 [2024-06-10 11:41:53.717232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059819 ] 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.838 { 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme$subsystem", 00:33:28.838 "trtype": "$TEST_TRANSPORT", 00:33:28.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "$NVMF_PORT", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.838 "hdgst": ${hdgst:-false}, 00:33:28.838 "ddgst": ${ddgst:-false} 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 } 00:33:28.838 EOF 00:33:28.838 )") 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.838 { 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme$subsystem", 00:33:28.838 "trtype": "$TEST_TRANSPORT", 00:33:28.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "$NVMF_PORT", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.838 "hdgst": ${hdgst:-false}, 00:33:28.838 "ddgst": ${ddgst:-false} 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 } 00:33:28.838 EOF 00:33:28.838 )") 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.838 { 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme$subsystem", 00:33:28.838 "trtype": "$TEST_TRANSPORT", 00:33:28.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "$NVMF_PORT", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.838 "hdgst": ${hdgst:-false}, 00:33:28.838 "ddgst": ${ddgst:-false} 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 } 00:33:28.838 EOF 00:33:28.838 )") 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:28.838 { 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme$subsystem", 00:33:28.838 "trtype": "$TEST_TRANSPORT", 00:33:28.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "$NVMF_PORT", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.838 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.838 "hdgst": ${hdgst:-false}, 00:33:28.838 "ddgst": ${ddgst:-false} 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 } 00:33:28.838 EOF 00:33:28.838 )") 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:33:28.838 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:33:28.838 11:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme1", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 },{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme2", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 },{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme3", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 },{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme4", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 },{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme5", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 },{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme6", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 },{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme7", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:33:28.838 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 },{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme8", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 },{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme9", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 },{ 00:33:28.838 "params": { 00:33:28.838 "name": "Nvme10", 00:33:28.838 "trtype": "tcp", 00:33:28.838 "traddr": "10.0.0.2", 00:33:28.838 "adrfam": "ipv4", 00:33:28.838 "trsvcid": "4420", 00:33:28.838 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:33:28.838 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:33:28.838 "hdgst": false, 00:33:28.838 "ddgst": false 00:33:28.838 }, 00:33:28.838 "method": "bdev_nvme_attach_controller" 00:33:28.838 }' 00:33:28.838 [2024-06-10 11:41:53.826376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.838 [2024-06-10 11:41:53.907515] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.213 Running I/O for 10 seconds... 00:33:30.213 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:30.213 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:33:30.213 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:30.213 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.213 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.471 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.471 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:33:30.472 11:41:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:33:30.472 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:33:30.730 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:33:30.730 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:33:30.730 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:33:30.731 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:33:30.731 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.731 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.989 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.989 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:33:30.989 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:33:30.989 11:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4059819 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 4059819 ']' 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 4059819 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@954 -- # uname 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4059819 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4059819' 00:33:31.249 killing process with pid 4059819 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 4059819 00:33:31.249 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 4059819 00:33:31.249 Received shutdown signal, test time was about 1.003393 seconds 00:33:31.249 00:33:31.249 Latency(us) 00:33:31.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme1n1 : 0.99 258.35 16.15 0.00 0.00 244702.82 22124.95 273468.62 00:33:31.249 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme2n1 : 0.96 200.58 12.54 0.00 0.00 308139.90 22334.67 276824.06 00:33:31.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme3n1 : 1.00 255.59 15.97 0.00 0.00 236555.88 27892.12 265080.01 00:33:31.249 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme4n1 : 1.00 255.36 15.96 0.00 0.00 231603.20 19608.37 273468.62 00:33:31.249 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme5n1 : 0.99 194.37 12.15 0.00 0.00 297391.45 28730.98 283534.95 00:33:31.249 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme6n1 : 0.99 193.31 12.08 0.00 0.00 292356.10 25899.83 317089.38 00:33:31.249 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme7n1 : 1.00 192.69 12.04 0.00 0.00 286460.59 26109.54 286890.39 00:33:31.249 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme8n1 : 0.98 196.38 12.27 0.00 0.00 273081.96 26843.55 249980.52 00:33:31.249 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme9n1 : 0.96 200.20 12.51 0.00 0.00 259116.24 22020.10 266757.73 00:33:31.249 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:31.249 Verification LBA range: start 0x0 length 0x400 00:33:31.249 Nvme10n1 : 0.97 197.86 12.37 0.00 0.00 256836.68 20132.66 271790.90 00:33:31.249 
=================================================================================================================== 00:33:31.249 Total : 2144.69 134.04 0.00 0.00 265805.95 19608.37 317089.38 00:33:31.508 11:41:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4059517 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:33:32.443 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:32.702 rmmod nvme_tcp 00:33:32.702 rmmod nvme_fabrics 00:33:32.702 rmmod nvme_keyring 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4059517 ']' 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4059517 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 4059517 ']' 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 4059517 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4059517 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4059517' 00:33:32.702 killing process with pid 4059517 00:33:32.702 11:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 4059517 00:33:32.702 11:41:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 4059517 00:33:32.961 11:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:32.961 11:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:32.961 11:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:32.961 11:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:32.961 11:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:32.961 11:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.961 11:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:32.961 11:41:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:35.564 00:33:35.564 real 0m8.429s 00:33:35.564 user 0m25.410s 00:33:35.564 sys 0m1.764s 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:35.564 ************************************ 00:33:35.564 END TEST nvmf_shutdown_tc2 00:33:35.564 ************************************ 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:35.564 ************************************ 00:33:35.564 START TEST nvmf_shutdown_tc3 00:33:35.564 ************************************ 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:35.564 
11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:35.564 11:42:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:35.564 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:35.564 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:33:35.564 Found net devices under 0000:af:00.0: cvl_0_0 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:35.564 Found net devices under 0000:af:00.1: cvl_0_1 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.564 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:35.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:35.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:33:35.565 00:33:35.565 --- 10.0.0.2 ping statistics --- 00:33:35.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.565 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:35.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:35.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:33:35.565 00:33:35.565 --- 10.0.0.1 ping statistics --- 00:33:35.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.565 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4061188 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4061188 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 4061188 ']' 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:35.565 11:42:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:35.823 [2024-06-10 11:42:00.690130] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:33:35.823 [2024-06-10 11:42:00.690190] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:35.823 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.823 [2024-06-10 11:42:00.809129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:35.823 [2024-06-10 11:42:00.894654] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:35.823 [2024-06-10 11:42:00.894704] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:35.823 [2024-06-10 11:42:00.894717] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:35.823 [2024-06-10 11:42:00.894729] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:35.823 [2024-06-10 11:42:00.894739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
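Both nvmf_tgt instances are launched with `-m 0x1E`, and the bdevperf processes run with a `0x1` mask; the mask is simply a bitmap of CPU cores, which is why the target logs reactors on cores 1-4 while bdevperf reports only core 0. A quick way to decode a mask from the shell:

  # 0x1E = binary 11110 -> bits 1,2,3,4 are set, one reactor per set bit
  for i in $(seq 0 7); do (( (0x1E >> i) & 1 )) && echo "core $i"; done
  # prints: core 1, core 2, core 3, core 4   (a 0x1 mask would print only core 0)
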
00:33:35.823 [2024-06-10 11:42:00.894850] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:35.823 [2024-06-10 11:42:00.894963] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:33:35.823 [2024-06-10 11:42:00.895072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:33:35.823 [2024-06-10 11:42:00.895072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:36.760 [2024-06-10 11:42:01.637842] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:36.760 11:42:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:36.760 Malloc1 00:33:36.760 [2024-06-10 11:42:01.754107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.760 Malloc2 00:33:36.760 Malloc3 00:33:36.760 Malloc4 00:33:37.019 Malloc5 00:33:37.019 Malloc6 00:33:37.019 Malloc7 00:33:37.019 Malloc8 00:33:37.019 Malloc9 00:33:37.279 Malloc10 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4061580 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 4061580 /var/tmp/bdevperf.sock 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 4061580 ']' 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:37.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
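The Malloc1..Malloc10 lines and the listener notice for 10.0.0.2:4420 above correspond to one RPC sequence per subsystem. Roughly, the loop amounts to the sketch below; shutdown.sh actually batches these calls through rpcs.txt, and the malloc bdev size and block size here are assumptions.

    # Sketch: ten TCP subsystems, one malloc namespace each, matching the notices above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192                   # as traced at shutdown.sh@20
    for i in {1..10}; do
        rpc bdev_malloc_create -b "Malloc$i" 64 512               # 64 MiB, 512 B blocks (assumed)
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done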
00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.279 { 00:33:37.279 "params": { 00:33:37.279 "name": "Nvme$subsystem", 00:33:37.279 "trtype": "$TEST_TRANSPORT", 00:33:37.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.279 "adrfam": "ipv4", 00:33:37.279 "trsvcid": "$NVMF_PORT", 00:33:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.279 "hdgst": ${hdgst:-false}, 00:33:37.279 "ddgst": ${ddgst:-false} 00:33:37.279 }, 00:33:37.279 "method": "bdev_nvme_attach_controller" 00:33:37.279 } 00:33:37.279 EOF 00:33:37.279 )") 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.279 { 00:33:37.279 "params": { 00:33:37.279 "name": "Nvme$subsystem", 00:33:37.279 "trtype": "$TEST_TRANSPORT", 00:33:37.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.279 "adrfam": "ipv4", 00:33:37.279 "trsvcid": "$NVMF_PORT", 00:33:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.279 "hdgst": ${hdgst:-false}, 00:33:37.279 "ddgst": ${ddgst:-false} 00:33:37.279 }, 00:33:37.279 "method": "bdev_nvme_attach_controller" 00:33:37.279 } 00:33:37.279 EOF 00:33:37.279 )") 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.279 { 00:33:37.279 "params": { 00:33:37.279 "name": "Nvme$subsystem", 00:33:37.279 "trtype": "$TEST_TRANSPORT", 00:33:37.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.279 "adrfam": "ipv4", 00:33:37.279 "trsvcid": "$NVMF_PORT", 00:33:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.279 "hdgst": ${hdgst:-false}, 00:33:37.279 "ddgst": ${ddgst:-false} 00:33:37.279 }, 00:33:37.279 "method": "bdev_nvme_attach_controller" 00:33:37.279 } 00:33:37.279 EOF 00:33:37.279 )") 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.279 { 00:33:37.279 "params": { 00:33:37.279 "name": "Nvme$subsystem", 00:33:37.279 "trtype": 
"$TEST_TRANSPORT", 00:33:37.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.279 "adrfam": "ipv4", 00:33:37.279 "trsvcid": "$NVMF_PORT", 00:33:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.279 "hdgst": ${hdgst:-false}, 00:33:37.279 "ddgst": ${ddgst:-false} 00:33:37.279 }, 00:33:37.279 "method": "bdev_nvme_attach_controller" 00:33:37.279 } 00:33:37.279 EOF 00:33:37.279 )") 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.279 { 00:33:37.279 "params": { 00:33:37.279 "name": "Nvme$subsystem", 00:33:37.279 "trtype": "$TEST_TRANSPORT", 00:33:37.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.279 "adrfam": "ipv4", 00:33:37.279 "trsvcid": "$NVMF_PORT", 00:33:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.279 "hdgst": ${hdgst:-false}, 00:33:37.279 "ddgst": ${ddgst:-false} 00:33:37.279 }, 00:33:37.279 "method": "bdev_nvme_attach_controller" 00:33:37.279 } 00:33:37.279 EOF 00:33:37.279 )") 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.279 { 00:33:37.279 "params": { 00:33:37.279 "name": "Nvme$subsystem", 00:33:37.279 "trtype": "$TEST_TRANSPORT", 00:33:37.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.279 "adrfam": "ipv4", 00:33:37.279 "trsvcid": "$NVMF_PORT", 00:33:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.279 "hdgst": ${hdgst:-false}, 00:33:37.279 "ddgst": ${ddgst:-false} 00:33:37.279 }, 00:33:37.279 "method": "bdev_nvme_attach_controller" 00:33:37.279 } 00:33:37.279 EOF 00:33:37.279 )") 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.279 [2024-06-10 11:42:02.248439] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:33:37.279 [2024-06-10 11:42:02.248501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061580 ] 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.279 { 00:33:37.279 "params": { 00:33:37.279 "name": "Nvme$subsystem", 00:33:37.279 "trtype": "$TEST_TRANSPORT", 00:33:37.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.279 "adrfam": "ipv4", 00:33:37.279 "trsvcid": "$NVMF_PORT", 00:33:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.279 "hdgst": ${hdgst:-false}, 00:33:37.279 "ddgst": ${ddgst:-false} 00:33:37.279 }, 00:33:37.279 "method": "bdev_nvme_attach_controller" 00:33:37.279 } 00:33:37.279 EOF 00:33:37.279 )") 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.279 { 00:33:37.279 "params": { 00:33:37.279 "name": "Nvme$subsystem", 00:33:37.279 "trtype": "$TEST_TRANSPORT", 00:33:37.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.279 "adrfam": "ipv4", 00:33:37.279 "trsvcid": "$NVMF_PORT", 00:33:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.279 "hdgst": ${hdgst:-false}, 00:33:37.279 "ddgst": ${ddgst:-false} 00:33:37.279 }, 00:33:37.279 "method": "bdev_nvme_attach_controller" 00:33:37.279 } 00:33:37.279 EOF 00:33:37.279 )") 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.279 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.280 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.280 { 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme$subsystem", 00:33:37.280 "trtype": "$TEST_TRANSPORT", 00:33:37.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "$NVMF_PORT", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.280 "hdgst": ${hdgst:-false}, 00:33:37.280 "ddgst": ${ddgst:-false} 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 } 00:33:37.280 EOF 00:33:37.280 )") 00:33:37.280 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.280 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:37.280 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:37.280 { 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme$subsystem", 00:33:37.280 "trtype": "$TEST_TRANSPORT", 00:33:37.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "$NVMF_PORT", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:37.280 "hdgst": ${hdgst:-false}, 
00:33:37.280 "ddgst": ${ddgst:-false} 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 } 00:33:37.280 EOF 00:33:37.280 )") 00:33:37.280 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:33:37.280 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:33:37.280 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:33:37.280 11:42:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme1", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 },{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme2", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 },{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme3", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 },{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme4", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 },{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme5", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 },{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme6", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 },{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme7", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 
00:33:37.280 },{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme8", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 },{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme9", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 },{ 00:33:37.280 "params": { 00:33:37.280 "name": "Nvme10", 00:33:37.280 "trtype": "tcp", 00:33:37.280 "traddr": "10.0.0.2", 00:33:37.280 "adrfam": "ipv4", 00:33:37.280 "trsvcid": "4420", 00:33:37.280 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:33:37.280 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:33:37.280 "hdgst": false, 00:33:37.280 "ddgst": false 00:33:37.280 }, 00:33:37.280 "method": "bdev_nvme_attach_controller" 00:33:37.280 }' 00:33:37.280 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.280 [2024-06-10 11:42:02.372710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.539 [2024-06-10 11:42:02.459302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.914 Running I/O for 10 seconds... 00:33:38.914 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:38.914 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:33:38.914 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:38.914 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:38.914 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:33:39.173 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:33:39.431 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:33:39.431 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:33:39.431 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:33:39.431 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.431 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:39.432 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:33:39.690 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.690 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:33:39.690 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:33:39.690 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:33:39.964 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:33:39.964 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:33:39.964 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:33:39.964 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:33:39.964 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:39.964 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:39.964 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:39.964 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4061188 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 4061188 ']' 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # 
kill -0 4061188 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4061188 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4061188' 00:33:39.965 killing process with pid 4061188 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 4061188 00:33:39.965 11:42:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 4061188 00:33:39.965 [2024-06-10 11:42:04.938883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.938935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.938968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.938988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 
11:42:04.939233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 
11:42:04.939631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.939972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.939993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.940011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 
11:42:04.940033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.940051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.940071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.940089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.940110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.940128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.965 [2024-06-10 11:42:04.940148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.965 [2024-06-10 11:42:04.940166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 
11:42:04.940426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 
11:42:04.940824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.940961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.940982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.966 [2024-06-10 11:42:04.941471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.966 [2024-06-10 11:42:04.941569] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1483960 was disconnected and freed. reset controller. 
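The ABORTED - SQ DELETION completions and the freed qpair above are the expected result of killing the target while bdevperf still has I/O in flight; the gating traced at shutdown.sh@59-@67 and the kill at autotest_common.sh@968 reduce to a loop like this sketch (variable names are illustrative, not the helper's own).

    # Sketch: let bdevperf run until Nvme1n1 shows >=100 completed reads, then stop the target.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    i=10
    while (( i != 0 )); do
        reads=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                | jq -r '.bdevs[0].num_read_ops')
        (( reads >= 100 )) && break        # trace above: 3, then 67, then 131
        sleep 0.25
        (( i-- ))
    done
    kill "$tgt_pid" && wait "$tgt_pid"     # pid saved at launch; triggers the SQ-deletion aborts logged above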
00:33:39.966 [2024-06-10 11:42:04.944584] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70a40 is same with the state(5) to be set
00:33:39.967 [... same recv-state error for tqpair=0x1c70a40 repeated through 11:42:04.945301 ...]
00:33:39.967 [2024-06-10 11:42:04.945449] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:39.967 [2024-06-10 11:42:04.945529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5820 (9): Bad file descriptor
00:33:39.967 [2024-06-10 11:42:04.946433] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set
00:33:39.968 [... same recv-state error for tqpair=0x1a8d340 repeated through 11:42:04.947055 ...]
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947067] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947090] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947102] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947113] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947125] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947137] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947148] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947160] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947174] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947187] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d340 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.968 [2024-06-10 11:42:04.947735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d5820 with addr=10.0.0.2, port=4420 00:33:39.968 [2024-06-10 11:42:04.947756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5820 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.947884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.968 [2024-06-10 11:42:04.947908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.968 [2024-06-10 11:42:04.947928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.968 [2024-06-10 11:42:04.947945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.968 [2024-06-10 11:42:04.947965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.968 [2024-06-10 11:42:04.947983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.968 [2024-06-10 11:42:04.948004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.968 [2024-06-10 11:42:04.948022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.968 [2024-06-10 11:42:04.948039] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6fa0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.948983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5820 (9): Bad file descriptor 00:33:39.968 [2024-06-10 11:42:04.949001] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949037] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949051] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949063] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949076] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949088] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949100] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949112] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949124] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949135] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949147] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949160] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949176] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949188] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949200] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949212] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949224] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949236] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949248] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949260] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949272] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949283] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949295] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949307] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949319] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949331] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949342] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949354] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949365] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949377] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949389] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949401] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.968 [2024-06-10 11:42:04.949412] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949424] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949436] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949448] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949460] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949472] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949483] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949498] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the 
state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949510] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949522] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949533] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949545] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949557] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949569] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949586] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949598] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949610] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949622] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949633] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949645] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949657] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949669] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949681] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949692] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949705] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949717] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949728] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949728] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error [2024-06-10 11:42:04.949740] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8d7e0 is same with state 00:33:39.969 the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949755] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1a8d7e0 is same with the state(5) to be set 00:33:39.969 [2024-06-10 11:42:04.949760] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.969 [2024-06-10 11:42:04.949781] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.969 [2024-06-10 11:42:04.950506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 
11:42:04.950900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.950979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.950998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.951018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.951036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.951057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.951076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.951101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.951120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.951143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.951162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.969 [2024-06-10 11:42:04.951183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.969 [2024-06-10 11:42:04.951201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951299] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 11:42:04.951796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951826] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951838] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951848] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951857] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951867] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951878] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951888] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951897] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951906] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951916] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with [2024-06-10 11:42:04.951911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:12the state(5) to be set 00:33:39.970 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.951933] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951942] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with [2024-06-10 11:42:04.951939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:33:39.970 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.951953] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:12[2024-06-10 11:42:04.951968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951979] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 11:42:04.951988] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.951999] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952009] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.952019] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952028] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.952037] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952048] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.952058] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952069] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 
11:42:04.952070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.952078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952087] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:12[2024-06-10 11:42:04.952096] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952113] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.952128] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952137] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.970 [2024-06-10 11:42:04.952146] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952156] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.970 [2024-06-10 11:42:04.952166] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.970 [2024-06-10 11:42:04.952176] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:12[2024-06-10 11:42:04.952185] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952196] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 11:42:04.952205] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952217] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952223] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:12[2024-06-10 11:42:04.952227] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952239] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 11:42:04.952248] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952260] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:12[2024-06-10 11:42:04.952272] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952283] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 11:42:04.952292] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952304] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:12[2024-06-10 11:42:04.952315] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952328] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952338] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952349] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952357] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952367] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952375] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952384] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952394] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:12[2024-06-10 11:42:04.952402] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952414] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-06-10 11:42:04.952423] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952433] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952442] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952452] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952461] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e140 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.952460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 
[2024-06-10 11:42:04.952609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.952964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.952982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 
11:42:04.953005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.953024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.953043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.953062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.953083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.953101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.953122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.971 [2024-06-10 11:42:04.953139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.971 [2024-06-10 11:42:04.953159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147c2c0 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.953452] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.971 [2024-06-10 11:42:04.953470] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953482] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953494] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953506] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953518] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953529] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953541] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953553] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953565] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953582] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.953595] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 
11:42:04.953607] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953618] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953630] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953635] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x147c2c0 was disconnected and freed. reset controller.
00:33:39.972 [2024-06-10 11:42:04.953642] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953661] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:39.972 [2024-06-10 11:42:04.953674] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953690] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953702] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953714] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953726] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953737] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953749] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953761] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953772] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953784] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953807] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953831] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953843] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953855] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953867] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953879] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953891] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953903] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953915] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953926] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.972 [2024-06-10 11:42:04.953945] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953958] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.972 [2024-06-10 11:42:04.953973] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953985] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.953989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.972 [2024-06-10 11:42:04.953997] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954009] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.972 [2024-06-10 11:42:04.954021] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.972 [2024-06-10 11:42:04.954033] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954048] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.972 [2024-06-10 11:42:04.954060] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954072] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.972 [2024-06-10 11:42:04.954084] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.972 [2024-06-10 11:42:04.954096] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954109] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.972 [2024-06-10 11:42:04.954122] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954135] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.972 [2024-06-10 11:42:04.954147] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954159] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.972 [2024-06-10 11:42:04.954171] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.972 [2024-06-10 11:42:04.954185] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954198] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10 11:42:04.954203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.972 [2024-06-10 11:42:04.954210] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set
00:33:39.972 [2024-06-10
11:42:04.954222] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.954222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.972 [2024-06-10 11:42:04.954234] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e5e0 is same with the state(5) to be set 00:33:39.972 [2024-06-10 11:42:04.954244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.972 [2024-06-10 11:42:04.954266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.972 [2024-06-10 11:42:04.954287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.972 [2024-06-10 11:42:04.954308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.972 [2024-06-10 11:42:04.954329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.972 [2024-06-10 11:42:04.954346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.954965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.954984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.973 [2024-06-10 11:42:04.955357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.973 [2024-06-10 11:42:04.955375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.973 [2024-06-10 11:42:04.955395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.973 [2024-06-10 11:42:04.955414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.973 [2024-06-10 11:42:04.955420] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.973 [2024-06-10 11:42:04.955440] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955453] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.973 [2024-06-10 11:42:04.955465] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.973 [2024-06-10 11:42:04.955477] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955496] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.973 [2024-06-10 11:42:04.955508] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955520] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.973 [2024-06-10 11:42:04.955532] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955544] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.973 [2024-06-10 11:42:04.955556] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.973 [2024-06-10 11:42:04.955568] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955587] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.973 [2024-06-10 11:42:04.955599] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.973 [2024-06-10 11:42:04.955618] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955632] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.973 [2024-06-10 11:42:04.955634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.955644] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955656] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.955667] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.955679] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955694] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.955705] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955720] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.955732] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955744] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.955756] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.955767] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955781] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.955793] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.955806] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955820] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.955832] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955846] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.955858] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955870] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.955882] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.955894] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955908] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.955922] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.955935] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955948] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.955960] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955972] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.955984] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955996] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.955994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.956008] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.956022] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956034] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.956046] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956058] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.956070] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.956083] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956097] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.956109] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956121] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.956135] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.974 [2024-06-10 11:42:04.956147] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956160] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.974 [2024-06-10 11:42:04.956171] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956183] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.974 [2024-06-10 11:42:04.956183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.975 [2024-06-10 11:42:04.956195] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.975 [2024-06-10 11:42:04.956206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.975 [2024-06-10 11:42:04.956207] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.975 [2024-06-10 11:42:04.956221] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ea80 is same with the state(5) to be set
00:33:39.975 [2024-06-10 11:42:04.956228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.975 [2024-06-10 11:42:04.956249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.975 [2024-06-10 11:42:04.956267]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.975 [2024-06-10 11:42:04.956287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.975 [2024-06-10 11:42:04.956306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.975 [2024-06-10 11:42:04.956326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.975 [2024-06-10 11:42:04.956344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.975 [2024-06-10 11:42:04.956364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.975 [2024-06-10 11:42:04.956383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.975 [2024-06-10 11:42:04.956403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.975 [2024-06-10 11:42:04.956422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.975 [2024-06-10 11:42:04.956443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.975 [2024-06-10 11:42:04.956465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.975 [2024-06-10 11:42:04.956485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.975 [2024-06-10 11:42:04.956503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.975 [2024-06-10 11:42:04.956523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.975 [2024-06-10 11:42:04.956541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.975 [2024-06-10 11:42:04.956638] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1538dc0 was disconnected and freed. reset controller. 
00:33:39.975 [2024-06-10 11:42:04.957231] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957265] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957279] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957291] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957303] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957316] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957329] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957341] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957352] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957364] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957376] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957388] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957400] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957412] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957424] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957436] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957448] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957460] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957471] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957483] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957495] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957511] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957523] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957535] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957546] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957558] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957570] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957589] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957601] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957613] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957624] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957636] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957649] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957660] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957672] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957685] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957700] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957712] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957723] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957735] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957747] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957759] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957771] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957783] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957795] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957807] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957818] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957830] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957844] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957857] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957868] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957880] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.975 [2024-06-10 11:42:04.957893] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.957904] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.957916] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.957928] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.957939] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.957951] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.957963] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.957975] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.957986] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.957998] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.958010] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8ef40 is same with the state(5) to be set 00:33:39.976 [2024-06-10 11:42:04.958453] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:33:39.976 [2024-06-10 11:42:04.958497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b6fa0 (9): Bad file descriptor 
00:33:39.976 [2024-06-10 11:42:04.958558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.958588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.958609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.958627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.958647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.958665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.958685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.958699] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.958715] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1492d80 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958728] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958739] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958748] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958756] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958765] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958773] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.958783] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958792] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.958800] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958811] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.958819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958830] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958839] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.958848] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958857] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.958866] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958875] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.958883] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958894] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.958902] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958914] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.958923] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958934] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f1b0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958943] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958953] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958962] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958970] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958980] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958988] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.958997] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959005] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.959014] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959027] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.959036] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959044] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.959053] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959063] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.959071] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959081] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.976 [2024-06-10 11:42:04.959091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959102] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.976 [2024-06-10 11:42:04.959111] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.976 [2024-06-10 11:42:04.959121] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959130] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.977 [2024-06-10 11:42:04.959139] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959148] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.977 [2024-06-10 11:42:04.959156] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959166] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1400d10 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959175] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959185] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959193] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959201] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set
00:33:39.977 [2024-06-10 11:42:04.959214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.977 [2024-06-10 11:42:04.959235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.977 [2024-06-10 11:42:04.959254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:39.977 [2024-06-10 11:42:04.959271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158fd70 is same with the state(5) to be set 00:33:39.977 [2024-06-10 11:42:04.959409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d36b0 is same with the state(5) to be set 00:33:39.977 [2024-06-10 11:42:04.959607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959721] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159d8a0 is same with the state(5) to be set 00:33:39.977 [2024-06-10 11:42:04.959802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.977 [2024-06-10 11:42:04.959941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.959958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9150 is same with the state(5) to be set 00:33:39.977 [2024-06-10 11:42:04.960061] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:33:39.977 [2024-06-10 11:42:04.961968] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:33:39.977 [2024-06-10 11:42:04.962010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158fd70 (9): Bad file descriptor 00:33:39.977 [2024-06-10 11:42:04.962795] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.977 [2024-06-10 11:42:04.963031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.977 [2024-06-10 11:42:04.963060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b6fa0 with addr=10.0.0.2, port=4420 00:33:39.977 [2024-06-10 11:42:04.963081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6fa0 is same with the state(5) to be set 00:33:39.977 [2024-06-10 11:42:04.964045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.977 [2024-06-10 11:42:04.964079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x158fd70 with addr=10.0.0.2, port=4420 00:33:39.977 [2024-06-10 11:42:04.964099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158fd70 is same with the state(5) to be set 00:33:39.977 [2024-06-10 11:42:04.964362] posix.c:1037:posix_sock_create: 
*ERROR*: connect() failed, errno = 111 00:33:39.977 [2024-06-10 11:42:04.964385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d5820 with addr=10.0.0.2, port=4420 00:33:39.977 [2024-06-10 11:42:04.964403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5820 is same with the state(5) to be set 00:33:39.977 [2024-06-10 11:42:04.964426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b6fa0 (9): Bad file descriptor 00:33:39.977 [2024-06-10 11:42:04.964487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 [2024-06-10 11:42:04.964509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.964535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 [2024-06-10 11:42:04.964554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.964590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 [2024-06-10 11:42:04.964608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.964629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 [2024-06-10 11:42:04.964648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.964669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 [2024-06-10 11:42:04.964687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.964707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 [2024-06-10 11:42:04.964726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.964750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 [2024-06-10 11:42:04.964768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.964789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 [2024-06-10 11:42:04.964806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.964827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 
[2024-06-10 11:42:04.964846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.977 [2024-06-10 11:42:04.964866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.977 [2024-06-10 11:42:04.964885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.964905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.964924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.964944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.964961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.964982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 
11:42:04.965236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965630] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.965966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.965984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.966005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.966022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.966043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.966061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.978 [2024-06-10 11:42:04.966082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.978 [2024-06-10 11:42:04.973232] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set 00:33:39.978 [2024-06-10 11:42:04.973244] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set 00:33:39.978 [2024-06-10 11:42:04.973254] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set 00:33:39.978 [2024-06-10 11:42:04.973262] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set 00:33:39.978 [2024-06-10 11:42:04.973271] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set 00:33:39.978 [2024-06-10 11:42:04.973280] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set 00:33:39.979 [2024-06-10 11:42:04.973293] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set 00:33:39.979 [2024-06-10 11:42:04.973302] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set 00:33:39.979 [2024-06-10 11:42:04.973311] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8f3e0 is same with the state(5) to be set 00:33:39.979 [2024-06-10 11:42:04.976916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.976951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.976971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.976993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 
[2024-06-10 11:42:04.977093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 
11:42:04.977492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.979 [2024-06-10 11:42:04.977854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.977874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1537860 is same with the state(5) to be set 00:33:39.979 [2024-06-10 11:42:04.977952] 
bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1537860 was disconnected and freed. reset controller. 00:33:39.979 [2024-06-10 11:42:04.978044] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:33:39.979 [2024-06-10 11:42:04.978114] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:33:39.979 [2024-06-10 11:42:04.978175] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:33:39.979 [2024-06-10 11:42:04.978344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158fd70 (9): Bad file descriptor 00:33:39.979 [2024-06-10 11:42:04.978373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5820 (9): Bad file descriptor 00:33:39.979 [2024-06-10 11:42:04.978394] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:33:39.979 [2024-06-10 11:42:04.978411] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:33:39.979 [2024-06-10 11:42:04.978431] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:33:39.979 [2024-06-10 11:42:04.978476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1492d80 (9): Bad file descriptor 00:33:39.979 [2024-06-10 11:42:04.978513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159f1b0 (9): Bad file descriptor 00:33:39.979 [2024-06-10 11:42:04.978569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.979 [2024-06-10 11:42:04.978606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.978626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.979 [2024-06-10 11:42:04.978649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.978670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.979 [2024-06-10 11:42:04.978690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.978711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:39.979 [2024-06-10 11:42:04.978730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.979 [2024-06-10 11:42:04.978748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b7520 is same with the state(5) to be set 00:33:39.979 [2024-06-10 11:42:04.978779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1400d10 (9): Bad file descriptor 00:33:39.979 [2024-06-10 11:42:04.978809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d36b0 (9): Bad file descriptor 00:33:39.979 [2024-06-10 11:42:04.978841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x159d8a0 (9): Bad file descriptor 00:33:39.979 [2024-06-10 11:42:04.978872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9150 (9): Bad file descriptor 00:33:39.979 [2024-06-10 11:42:04.978903] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.979 [2024-06-10 11:42:04.978929] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.979 [2024-06-10 11:42:04.978951] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.979 [2024-06-10 11:42:04.980602] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:33:39.979 [2024-06-10 11:42:04.980711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.979 [2024-06-10 11:42:04.980735] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:33:39.979 [2024-06-10 11:42:04.980773] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:33:39.980 [2024-06-10 11:42:04.980792] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:33:39.980 [2024-06-10 11:42:04.980811] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:33:39.980 [2024-06-10 11:42:04.980835] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.980 [2024-06-10 11:42:04.980852] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.980 [2024-06-10 11:42:04.980869] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.980 [2024-06-10 11:42:04.981022] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.980 [2024-06-10 11:42:04.981043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.980 [2024-06-10 11:42:04.981254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.980 [2024-06-10 11:42:04.981282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1400d10 with addr=10.0.0.2, port=4420 00:33:39.980 [2024-06-10 11:42:04.981301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1400d10 is same with the state(5) to be set 00:33:39.980 [2024-06-10 11:42:04.981782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1400d10 (9): Bad file descriptor 00:33:39.980 [2024-06-10 11:42:04.981897] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:33:39.980 [2024-06-10 11:42:04.981923] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:33:39.980 [2024-06-10 11:42:04.981942] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:33:39.980 [2024-06-10 11:42:04.981960] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:33:39.980 [2024-06-10 11:42:04.982048] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.980 [2024-06-10 11:42:04.988378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b7520 (9): Bad file descriptor 00:33:39.980 [2024-06-10 11:42:04.988594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.988646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.988687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.988728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.988768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.988814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.988853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.988893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.988933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.988972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.988990] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.980 [2024-06-10 11:42:04.989846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.980 [2024-06-10 11:42:04.989869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.989887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.989909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.989927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.989948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.989966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.989986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.990976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.990998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.991016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.991038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.991056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.991078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.991097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.991117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.991136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.991157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.991176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.991196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1484c60 is same with the state(5) to be set 00:33:39.981 [2024-06-10 11:42:04.992824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.992856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.992882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.992901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.992924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.992948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.992969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.992987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.993010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.993029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.981 [2024-06-10 11:42:04.993050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.981 [2024-06-10 11:42:04.993070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.993963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.993988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.982 [2024-06-10 11:42:04.994635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.982 [2024-06-10 11:42:04.994653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.994675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:39.983 [2024-06-10 11:42:04.994693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.994715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.994733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.994754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.994773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.994794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.994814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.994835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.994853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.994874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.994892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.994914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.994932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.994953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.994971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.994995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 
11:42:04.995091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.995412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.995432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153a290 is same with the state(5) to be set 00:33:39.983 [2024-06-10 11:42:04.997043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997508] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.983 [2024-06-10 11:42:04.997617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.983 [2024-06-10 11:42:04.997638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.997657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.997678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.997696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.997718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.997736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.997757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.997775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.997797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.997815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.997836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.997855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.997875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.997895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.997915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.997934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.997954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.997971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.997993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.998969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.998986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.999008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.999026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.999047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.999066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.999086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.984 [2024-06-10 11:42:04.999105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.999129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:39.984 [2024-06-10 11:42:04.999148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.984 [2024-06-10 11:42:04.999169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 
11:42:04.999546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:04.999635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:04.999655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cdb30 is same with the state(5) to be set 00:33:39.985 [2024-06-10 11:42:05.001269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.001962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.001984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.985 [2024-06-10 11:42:05.002383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.985 [2024-06-10 11:42:05.002401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.002972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.002993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.003861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.003881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cf030 is same with the state(5) to be set 00:33:39.986 [2024-06-10 11:42:05.005711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.005743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.005770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.986 [2024-06-10 11:42:05.005789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.986 [2024-06-10 11:42:05.005811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.005830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.005850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.005870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.005891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.005909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.005929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.005947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.005969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.005986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006232] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.006969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.006990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.007009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.007030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.007049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.007071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.007090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.007112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.007130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.007155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.007174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.987 [2024-06-10 11:42:05.007196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.987 [2024-06-10 11:42:05.007215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:39.988 [2024-06-10 11:42:05.007862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.007973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.007985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.008000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.008013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.008028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.008040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.008058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.008072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.008087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.008099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 11:42:05.008114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.988 [2024-06-10 11:42:05.008127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.988 [2024-06-10 
11:42:05.008142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:39.988 [2024-06-10 11:42:05.008155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:39.988 [2024-06-10 11:42:05.008169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d0530 is same with the state(5) to be set
00:33:39.988 [2024-06-10 11:42:05.009467] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:33:39.988 [2024-06-10 11:42:05.009491] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:39.988 [2024-06-10 11:42:05.009507] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:33:39.988 [2024-06-10 11:42:05.009523] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:33:39.988 [2024-06-10 11:42:05.009538] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:33:39.988 [2024-06-10 11:42:05.009630] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:33:39.988 [2024-06-10 11:42:05.009651] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:33:39.988 [2024-06-10 11:42:05.009679] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:33:39.988 [2024-06-10 11:42:05.009758] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:33:39.988 [2024-06-10 11:42:05.009776] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:33:39.988 [2024-06-10 11:42:05.009790] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:33:39.988 [2024-06-10 11:42:05.010074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.988 [2024-06-10 11:42:05.010096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b6fa0 with addr=10.0.0.2, port=4420
00:33:39.988 [2024-06-10 11:42:05.010111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6fa0 is same with the state(5) to be set
00:33:39.988 [2024-06-10 11:42:05.010307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.988 [2024-06-10 11:42:05.010324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d5820 with addr=10.0.0.2, port=4420
00:33:39.988 [2024-06-10 11:42:05.010338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d5820 is same with the state(5) to be set
00:33:39.988 [2024-06-10 11:42:05.010570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:39.988 [2024-06-10 11:42:05.010598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x158fd70 with addr=10.0.0.2, port=4420
00:33:39.988 [2024-06-10 11:42:05.010611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158fd70 is same with the state(5) to be set
00:33:39.988 [2024-06-10 11:42:05.010855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno =
111 00:33:39.988 [2024-06-10 11:42:05.010871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d36b0 with addr=10.0.0.2, port=4420 00:33:39.988 [2024-06-10 11:42:05.010884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d36b0 is same with the state(5) to be set 00:33:39.988 [2024-06-10 11:42:05.011137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.988 [2024-06-10 11:42:05.011153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159d8a0 with addr=10.0.0.2, port=4420 00:33:39.988 [2024-06-10 11:42:05.011165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159d8a0 is same with the state(5) to be set 00:33:39.988 [2024-06-10 11:42:05.012623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.012979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.012993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.989 [2024-06-10 11:42:05.013583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.989 [2024-06-10 11:42:05.013599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:39.990 [2024-06-10 11:42:05.013681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 
11:42:05.013958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.013985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.013997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014232] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.990 [2024-06-10 11:42:05.014385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:39.990 [2024-06-10 11:42:05.014399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1900 is same with the state(5) to be set 00:33:39.990 [2024-06-10 11:42:05.016560] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:33:39.990 task offset: 26752 on job bdev=Nvme1n1 fails 00:33:39.990 00:33:39.990 Latency(us) 00:33:39.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.990 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.990 Job: Nvme1n1 ended in about 0.95 seconds with error 00:33:39.990 Verification LBA range: start 0x0 length 0x400 00:33:39.990 Nvme1n1 : 0.95 201.71 12.61 67.24 0.00 235035.80 5898.24 276824.06 00:33:39.990 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.990 Job: Nvme2n1 ended in about 1.00 seconds with error 00:33:39.990 Verification LBA range: start 0x0 length 0x400 00:33:39.990 Nvme2n1 : 1.00 128.00 8.00 64.00 0.00 322585.12 34603.01 291923.56 00:33:39.990 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.990 Job: Nvme3n1 ended in about 0.99 seconds with error 00:33:39.990 Verification LBA range: start 0x0 length 0x400 00:33:39.990 Nvme3n1 : 0.99 194.37 12.15 64.79 0.00 233593.45 20342.37 265080.01 00:33:39.990 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.990 Job: 
Nvme4n1 ended in about 0.97 seconds with error 00:33:39.990 Verification LBA range: start 0x0 length 0x400 00:33:39.990 Nvme4n1 : 0.97 198.15 12.38 66.05 0.00 223687.27 10328.47 275146.34 00:33:39.990 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.990 Job: Nvme5n1 ended in about 1.00 seconds with error 00:33:39.990 Verification LBA range: start 0x0 length 0x400 00:33:39.990 Nvme5n1 : 1.00 127.46 7.97 63.73 0.00 303176.64 25165.82 266757.73 00:33:39.990 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.990 Job: Nvme6n1 ended in about 1.01 seconds with error 00:33:39.990 Verification LBA range: start 0x0 length 0x400 00:33:39.990 Nvme6n1 : 1.01 126.93 7.93 63.47 0.00 297731.69 21600.67 285212.67 00:33:39.990 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.990 Job: Nvme7n1 ended in about 1.01 seconds with error 00:33:39.990 Verification LBA range: start 0x0 length 0x400 00:33:39.990 Nvme7n1 : 1.01 126.40 7.90 63.20 0.00 292212.19 23278.39 303667.61 00:33:39.990 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.990 Job: Nvme8n1 ended in about 1.02 seconds with error 00:33:39.990 Verification LBA range: start 0x0 length 0x400 00:33:39.990 Nvme8n1 : 1.02 125.88 7.87 62.94 0.00 286707.44 24431.82 276824.06 00:33:39.990 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.991 Job: Nvme9n1 ended in about 1.02 seconds with error 00:33:39.991 Verification LBA range: start 0x0 length 0x400 00:33:39.991 Nvme9n1 : 1.02 125.11 7.82 62.56 0.00 281769.85 23173.53 271790.90 00:33:39.991 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:33:39.991 Job: Nvme10n1 ended in about 0.97 seconds with error 00:33:39.991 Verification LBA range: start 0x0 length 0x400 00:33:39.991 Nvme10n1 : 0.97 132.53 8.28 66.26 0.00 255759.16 14365.49 305345.33 00:33:39.991 =================================================================================================================== 00:33:39.991 Total : 1486.56 92.91 644.24 0.00 269366.43 5898.24 305345.33 00:33:39.991 [2024-06-10 11:42:05.042824] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:39.991 [2024-06-10 11:42:05.042868] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:33:39.991 [2024-06-10 11:42:05.043130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.991 [2024-06-10 11:42:05.043155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1492d80 with addr=10.0.0.2, port=4420 00:33:39.991 [2024-06-10 11:42:05.043172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1492d80 is same with the state(5) to be set 00:33:39.991 [2024-06-10 11:42:05.043415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.991 [2024-06-10 11:42:05.043433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9150 with addr=10.0.0.2, port=4420 00:33:39.991 [2024-06-10 11:42:05.043445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9150 is same with the state(5) to be set 00:33:39.991 [2024-06-10 11:42:05.043648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.991 [2024-06-10 11:42:05.043666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159f1b0 with 
addr=10.0.0.2, port=4420 00:33:39.991 [2024-06-10 11:42:05.043680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159f1b0 is same with the state(5) to be set 00:33:39.991 [2024-06-10 11:42:05.043700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b6fa0 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.043717] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d5820 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.043733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158fd70 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.043749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d36b0 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.043764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159d8a0 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.044095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.991 [2024-06-10 11:42:05.044117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1400d10 with addr=10.0.0.2, port=4420 00:33:39.991 [2024-06-10 11:42:05.044131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1400d10 is same with the state(5) to be set 00:33:39.991 [2024-06-10 11:42:05.044441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.991 [2024-06-10 11:42:05.044458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b7520 with addr=10.0.0.2, port=4420 00:33:39.991 [2024-06-10 11:42:05.044471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b7520 is same with the state(5) to be set 00:33:39.991 [2024-06-10 11:42:05.044487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1492d80 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.044510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9150 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.044526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159f1b0 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.044541] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.044553] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.044566] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:33:39.991 [2024-06-10 11:42:05.044592] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.044604] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.044616] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:39.991 [2024-06-10 11:42:05.044631] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.044643] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.044656] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:33:39.991 [2024-06-10 11:42:05.044670] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.044682] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.044695] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:33:39.991 [2024-06-10 11:42:05.044711] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.044723] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.044735] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:33:39.991 [2024-06-10 11:42:05.044763] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.991 [2024-06-10 11:42:05.044781] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.991 [2024-06-10 11:42:05.044798] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.991 [2024-06-10 11:42:05.044815] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.991 [2024-06-10 11:42:05.044831] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.991 [2024-06-10 11:42:05.044848] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.991 [2024-06-10 11:42:05.044864] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.991 [2024-06-10 11:42:05.044881] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.991 [2024-06-10 11:42:05.045266] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.991 [2024-06-10 11:42:05.045282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.991 [2024-06-10 11:42:05.045293] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.991 [2024-06-10 11:42:05.045304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.991 [2024-06-10 11:42:05.045315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.991 [2024-06-10 11:42:05.045331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1400d10 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.045349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b7520 (9): Bad file descriptor 00:33:39.991 [2024-06-10 11:42:05.045363] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.045375] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.045387] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:33:39.991 [2024-06-10 11:42:05.045403] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.045414] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.045426] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:33:39.991 [2024-06-10 11:42:05.045440] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.045452] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.045464] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:33:39.991 [2024-06-10 11:42:05.046049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.991 [2024-06-10 11:42:05.046070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.991 [2024-06-10 11:42:05.046082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.991 [2024-06-10 11:42:05.046094] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.046106] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.046119] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:33:39.991 [2024-06-10 11:42:05.046135] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:33:39.991 [2024-06-10 11:42:05.046147] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:33:39.991 [2024-06-10 11:42:05.046158] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:33:39.991 [2024-06-10 11:42:05.046204] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.991 [2024-06-10 11:42:05.046217] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
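The run of READ ... ABORTED - SQ DELETION notices, the refused connections (connect() failed, errno = 111) and the failed controller resets above all point at the same cause: the NVMe-oF target this shutdown_tc3 run was driving is no longer reachable, so bdevperf's outstanding verify I/O against cnode1 through cnode10 is aborted and every reconnect attempt fails. The next log lines clean the test up. What follows is a condensed, hypothetical sketch of that teardown; the stale PID and the workspace paths are copied from the log, while the grouping into one block (and the testdir variable) is illustrative rather than the shutdown.sh source.

# Condensed sketch of the teardown performed by the following log lines.
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target

# The target PID recorded earlier is already gone, so kill reports
# "No such process" and the script deliberately ignores the failure.
kill -9 4061580 || true

# stoptarget: drop the per-run bdevperf state and generated config files.
rm -f ./local-job0-0-verify.state
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"

# nvmftestfini / nvmfcleanup: unload the kernel NVMe-oF initiator modules
# and flush the test address from the initiator-side interface.
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1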
00:33:40.560 11:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:33:40.560 11:42:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4061580 00:33:41.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4061580) - No such process 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:41.496 rmmod nvme_tcp 00:33:41.496 rmmod nvme_fabrics 00:33:41.496 rmmod nvme_keyring 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:41.496 11:42:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.033 11:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:44.033 00:33:44.033 real 0m8.358s 00:33:44.033 user 0m20.729s 00:33:44.033 sys 0m1.797s 00:33:44.033 
11:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:44.033 11:42:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:33:44.033 ************************************ 00:33:44.033 END TEST nvmf_shutdown_tc3 00:33:44.033 ************************************ 00:33:44.033 11:42:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:33:44.033 00:33:44.033 real 0m35.552s 00:33:44.033 user 1m22.764s 00:33:44.033 sys 0m12.087s 00:33:44.033 11:42:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:44.033 11:42:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:44.033 ************************************ 00:33:44.033 END TEST nvmf_shutdown 00:33:44.033 ************************************ 00:33:44.033 11:42:08 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:33:44.033 11:42:08 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:44.033 11:42:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.033 11:42:08 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:33:44.033 11:42:08 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:44.033 11:42:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.033 11:42:08 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:33:44.033 11:42:08 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:33:44.033 11:42:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:44.033 11:42:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:44.033 11:42:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.033 ************************************ 00:33:44.033 START TEST nvmf_multicontroller 00:33:44.033 ************************************ 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:33:44.033 * Looking for test storage... 
00:33:44.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:33:44.033 11:42:08 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:33:44.033 11:42:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.156 11:42:17 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:52.156 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:52.156 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:52.156 Found net devices under 0000:af:00.0: cvl_0_0 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:52.156 Found net devices under 0000:af:00.1: cvl_0_1 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.156 11:42:17 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:52.156 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:52.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:33:52.416 00:33:52.416 --- 10.0.0.2 ping statistics --- 00:33:52.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.416 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:52.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:33:52.416 00:33:52.416 --- 10.0.0.1 ping statistics --- 00:33:52.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.416 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:52.416 11:42:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4067054 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4067054 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 4067054 ']' 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:52.417 11:42:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:52.417 [2024-06-10 11:42:17.409007] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:33:52.417 [2024-06-10 11:42:17.409066] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.417 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.675 [2024-06-10 11:42:17.527880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:52.676 [2024-06-10 11:42:17.612799] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.676 [2024-06-10 11:42:17.612840] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.676 [2024-06-10 11:42:17.612853] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.676 [2024-06-10 11:42:17.612866] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.676 [2024-06-10 11:42:17.612876] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.676 [2024-06-10 11:42:17.612982] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:52.676 [2024-06-10 11:42:17.613093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.676 [2024-06-10 11:42:17.613093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.242 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:53.242 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:33:53.242 11:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:53.242 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:53.242 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 [2024-06-10 11:42:18.377124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 Malloc0 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 [2024-06-10 11:42:18.442402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 [2024-06-10 11:42:18.450323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 Malloc1 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:53.501 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4067334 00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4067334 /var/tmp/bdevperf.sock 00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 4067334 ']' 00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:53.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
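By this point the target has been configured entirely over JSON-RPC: one TCP transport, two 64 MiB malloc bdevs, and two subsystems (cnode1 and cnode2) that each listen on 10.0.0.2 ports 4420 and 4421; bdevperf is then started with its own RPC socket so controllers can be attached to it at runtime. Below is a minimal sketch of that sequence, reusing the rpc_cmd invocations visible in the log (rpc_cmd is the suite's RPC helper; each call mirrors an SPDK scripts/rpc.py command). The rootdir variable and the backgrounding of bdevperf are readability assumptions, not the multicontroller.sh source.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target side: one TCP transport and two malloc-backed subsystems, each
# reachable on both ports so the same namespace has two network paths.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192

rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

rpc_cmd bdev_malloc_create 64 512 -b Malloc1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

# Initiator side: bdevperf runs as its own SPDK app with a private RPC
# socket (flags copied verbatim from the log); the first attach creates
# controller NVMe0 (bdev NVMe0n1) against cnode1.
"$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w write -t 1 -f &

rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -i 10.0.0.2 -c 60000

The bdev_nvme_attach_controller calls that follow in the log then exercise the duplicate-name handling: re-attaching NVMe0 with a different host NQN, against cnode2, or with -x disable / -x failover is rejected with JSON-RPC error -114, while adding a second path for the same subsystem on port 4421, and later attaching that path separately as NVMe1, goes through.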
00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:53.502 11:42:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.440 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:54.440 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:33:54.440 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:33:54.440 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.440 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.699 NVMe0n1 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.699 1 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.699 request: 00:33:54.699 { 00:33:54.699 "name": "NVMe0", 00:33:54.699 "trtype": "tcp", 00:33:54.699 "traddr": "10.0.0.2", 00:33:54.699 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:33:54.699 "hostaddr": "10.0.0.2", 00:33:54.699 "hostsvcid": "60000", 00:33:54.699 "adrfam": "ipv4", 00:33:54.699 "trsvcid": "4420", 00:33:54.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.699 "method": 
"bdev_nvme_attach_controller", 00:33:54.699 "req_id": 1 00:33:54.699 } 00:33:54.699 Got JSON-RPC error response 00:33:54.699 response: 00:33:54.699 { 00:33:54.699 "code": -114, 00:33:54.699 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:33:54.699 } 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:33:54.699 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.700 request: 00:33:54.700 { 00:33:54.700 "name": "NVMe0", 00:33:54.700 "trtype": "tcp", 00:33:54.700 "traddr": "10.0.0.2", 00:33:54.700 "hostaddr": "10.0.0.2", 00:33:54.700 "hostsvcid": "60000", 00:33:54.700 "adrfam": "ipv4", 00:33:54.700 "trsvcid": "4420", 00:33:54.700 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:54.700 "method": "bdev_nvme_attach_controller", 00:33:54.700 "req_id": 1 00:33:54.700 } 00:33:54.700 Got JSON-RPC error response 00:33:54.700 response: 00:33:54.700 { 00:33:54.700 "code": -114, 00:33:54.700 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:33:54.700 } 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.700 request: 00:33:54.700 { 00:33:54.700 "name": "NVMe0", 00:33:54.700 "trtype": "tcp", 00:33:54.700 "traddr": "10.0.0.2", 00:33:54.700 "hostaddr": "10.0.0.2", 00:33:54.700 "hostsvcid": "60000", 00:33:54.700 "adrfam": "ipv4", 00:33:54.700 "trsvcid": "4420", 00:33:54.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.700 "multipath": "disable", 00:33:54.700 "method": "bdev_nvme_attach_controller", 00:33:54.700 "req_id": 1 00:33:54.700 } 00:33:54.700 Got JSON-RPC error response 00:33:54.700 response: 00:33:54.700 { 00:33:54.700 "code": -114, 00:33:54.700 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:33:54.700 } 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.700 request: 00:33:54.700 { 00:33:54.700 "name": "NVMe0", 00:33:54.700 "trtype": "tcp", 00:33:54.700 "traddr": "10.0.0.2", 00:33:54.700 "hostaddr": "10.0.0.2", 00:33:54.700 "hostsvcid": "60000", 00:33:54.700 "adrfam": "ipv4", 00:33:54.700 "trsvcid": "4420", 00:33:54.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.700 "multipath": "failover", 00:33:54.700 "method": "bdev_nvme_attach_controller", 00:33:54.700 "req_id": 1 00:33:54.700 } 00:33:54.700 Got JSON-RPC error response 00:33:54.700 response: 00:33:54.700 { 00:33:54.700 "code": -114, 00:33:54.700 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:33:54.700 } 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.700 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.700 11:42:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.959 00:33:54.959 11:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.959 11:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:54.959 11:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:54.959 11:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:33:54.959 11:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:54.959 11:42:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:54.959 11:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:33:54.959 11:42:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:56.335 0 00:33:56.335 11:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:33:56.335 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.335 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:56.335 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.335 11:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4067334 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 4067334 ']' 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 4067334 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4067334 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4067334' 00:33:56.336 killing process with pid 4067334 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 4067334 00:33:56.336 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 4067334 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:33:56.594 11:42:21 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:33:56.594 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:33:56.594 [2024-06-10 11:42:18.558628] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:33:56.594 [2024-06-10 11:42:18.558695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4067334 ] 00:33:56.594 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.594 [2024-06-10 11:42:18.679689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.594 [2024-06-10 11:42:18.762072] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.594 [2024-06-10 11:42:20.018544] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name af171836-a41c-473a-a69b-6c7ea162726b already exists 00:33:56.594 [2024-06-10 11:42:20.018759] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:af171836-a41c-473a-a69b-6c7ea162726b alias for bdev NVMe1n1 00:33:56.594 [2024-06-10 11:42:20.018835] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:33:56.594 Running I/O for 1 seconds... 
00:33:56.594 00:33:56.594 Latency(us) 00:33:56.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.594 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:33:56.594 NVMe0n1 : 1.01 18716.15 73.11 0.00 0.00 6820.70 4299.16 15623.78 00:33:56.594 =================================================================================================================== 00:33:56.594 Total : 18716.15 73.11 0.00 0.00 6820.70 4299.16 15623.78 00:33:56.594 Received shutdown signal, test time was about 1.000000 seconds 00:33:56.594 00:33:56.594 Latency(us) 00:33:56.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.594 =================================================================================================================== 00:33:56.594 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:56.594 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:56.594 rmmod nvme_tcp 00:33:56.594 rmmod nvme_fabrics 00:33:56.594 rmmod nvme_keyring 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4067054 ']' 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4067054 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 4067054 ']' 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 4067054 00:33:56.594 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:33:56.595 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:56.595 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4067054 00:33:56.595 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:33:56.595 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:33:56.595 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4067054' 00:33:56.595 killing process with pid 4067054 00:33:56.595 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 4067054 00:33:56.595 11:42:21 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 4067054 00:33:56.854 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:56.854 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:56.854 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:56.855 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:56.855 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:56.855 11:42:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.855 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:56.855 11:42:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.392 11:42:23 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:59.392 00:33:59.392 real 0m15.156s 00:33:59.392 user 0m18.114s 00:33:59.392 sys 0m7.533s 00:33:59.392 11:42:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:59.392 11:42:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:59.392 ************************************ 00:33:59.392 END TEST nvmf_multicontroller 00:33:59.392 ************************************ 00:33:59.392 11:42:23 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:33:59.392 11:42:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:59.392 11:42:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:59.393 11:42:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:59.393 ************************************ 00:33:59.393 START TEST nvmf_aer 00:33:59.393 ************************************ 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:33:59.393 * Looking for test storage... 
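The nvmf_multicontroller run above drives everything through SPDK's JSON-RPC interface via the suite's rpc_cmd wrapper against the bdevperf socket. As a consolidated, hedged sketch of the same calls using the stock scripts/rpc.py client from the SPDK tree (socket path, addresses, ports and NQNs are copied from the log output above and will differ on other rigs):

  # first path for the NVMe0 bdev controller
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # re-attaching under the same -b name is rejected with JSON-RPC error -114 (the responses above)
  # unless the arguments describe a valid additional path; -x disable / -x failover pick the
  # multipath policy applied to such extra attach calls
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers

The test script issues the same requests through rpc_cmd, then counts the attached controllers before letting bdevperf.py perform_tests run I/O across them.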
00:33:59.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:33:59.393 11:42:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:07.519 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:34:07.519 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.519 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:07.520 Found net devices under 0000:af:00.0: cvl_0_0 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:07.520 Found net devices under 0000:af:00.1: cvl_0_1 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.520 
11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:07.520 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.779 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.779 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.779 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:07.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:34:07.780 00:34:07.780 --- 10.0.0.2 ping statistics --- 00:34:07.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.780 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:07.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:34:07.780 00:34:07.780 --- 10.0.0.1 ping statistics --- 00:34:07.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.780 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4072311 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4072311 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 4072311 ']' 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:07.780 11:42:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:07.780 [2024-06-10 11:42:32.842844] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:34:07.780 [2024-06-10 11:42:32.842904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.038 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.038 [2024-06-10 11:42:32.971993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:08.038 [2024-06-10 11:42:33.059403] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.038 [2024-06-10 11:42:33.059449] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
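For readability, the interface plumbing that precedes the ping checks above amounts to splitting the two e810 ports between the root namespace and a private one, so the target and the initiator talk over a real link. Roughly, with names and addresses copied from the log:

  ip netns add cvl_0_0_ns_spdk                                    # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let the NVMe/TCP port through

The nvmf_tgt process is then launched inside cvl_0_0_ns_spdk (the ip netns exec prefix on the nvmf_tgt command above), which is why it listens on 10.0.0.2 while the host-side tools connect from 10.0.0.1.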
00:34:08.038 [2024-06-10 11:42:33.059462] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.038 [2024-06-10 11:42:33.059475] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.038 [2024-06-10 11:42:33.059485] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.038 [2024-06-10 11:42:33.059592] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.038 [2024-06-10 11:42:33.059651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:08.038 [2024-06-10 11:42:33.059770] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.038 [2024-06-10 11:42:33.059770] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:08.970 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:08.971 [2024-06-10 11:42:33.808007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:08.971 Malloc0 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:08.971 [2024-06-10 11:42:33.863678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:08.971 [ 00:34:08.971 { 00:34:08.971 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:08.971 "subtype": "Discovery", 00:34:08.971 "listen_addresses": [], 00:34:08.971 "allow_any_host": true, 00:34:08.971 "hosts": [] 00:34:08.971 }, 00:34:08.971 { 00:34:08.971 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.971 "subtype": "NVMe", 00:34:08.971 "listen_addresses": [ 00:34:08.971 { 00:34:08.971 "trtype": "TCP", 00:34:08.971 "adrfam": "IPv4", 00:34:08.971 "traddr": "10.0.0.2", 00:34:08.971 "trsvcid": "4420" 00:34:08.971 } 00:34:08.971 ], 00:34:08.971 "allow_any_host": true, 00:34:08.971 "hosts": [], 00:34:08.971 "serial_number": "SPDK00000000000001", 00:34:08.971 "model_number": "SPDK bdev Controller", 00:34:08.971 "max_namespaces": 2, 00:34:08.971 "min_cntlid": 1, 00:34:08.971 "max_cntlid": 65519, 00:34:08.971 "namespaces": [ 00:34:08.971 { 00:34:08.971 "nsid": 1, 00:34:08.971 "bdev_name": "Malloc0", 00:34:08.971 "name": "Malloc0", 00:34:08.971 "nguid": "8211612A4420423D90D1207B1623F0A8", 00:34:08.971 "uuid": "8211612a-4420-423d-90d1-207b1623f0a8" 00:34:08.971 } 00:34:08.971 ] 00:34:08.971 } 00:34:08.971 ] 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4072516 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:34:08.971 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:34:08.971 11:42:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 2 -lt 200 ']' 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=3 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:09.227 Malloc1 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.227 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:09.227 Asynchronous Event Request test 00:34:09.227 Attaching to 10.0.0.2 00:34:09.227 Attached to 10.0.0.2 00:34:09.227 Registering asynchronous event callbacks... 00:34:09.227 Starting namespace attribute notice tests for all controllers... 00:34:09.227 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:34:09.227 aer_cb - Changed Namespace 00:34:09.227 Cleaning up... 
00:34:09.227 [ 00:34:09.227 { 00:34:09.227 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:09.227 "subtype": "Discovery", 00:34:09.227 "listen_addresses": [], 00:34:09.227 "allow_any_host": true, 00:34:09.227 "hosts": [] 00:34:09.227 }, 00:34:09.227 { 00:34:09.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:09.227 "subtype": "NVMe", 00:34:09.227 "listen_addresses": [ 00:34:09.227 { 00:34:09.227 "trtype": "TCP", 00:34:09.227 "adrfam": "IPv4", 00:34:09.227 "traddr": "10.0.0.2", 00:34:09.227 "trsvcid": "4420" 00:34:09.227 } 00:34:09.227 ], 00:34:09.227 "allow_any_host": true, 00:34:09.227 "hosts": [], 00:34:09.227 "serial_number": "SPDK00000000000001", 00:34:09.227 "model_number": "SPDK bdev Controller", 00:34:09.227 "max_namespaces": 2, 00:34:09.227 "min_cntlid": 1, 00:34:09.227 "max_cntlid": 65519, 00:34:09.227 "namespaces": [ 00:34:09.227 { 00:34:09.227 "nsid": 1, 00:34:09.227 "bdev_name": "Malloc0", 00:34:09.227 "name": "Malloc0", 00:34:09.227 "nguid": "8211612A4420423D90D1207B1623F0A8", 00:34:09.227 "uuid": "8211612a-4420-423d-90d1-207b1623f0a8" 00:34:09.227 }, 00:34:09.227 { 00:34:09.227 "nsid": 2, 00:34:09.227 "bdev_name": "Malloc1", 00:34:09.227 "name": "Malloc1", 00:34:09.227 "nguid": "B9A76E71E7BA4668942EE75186202CF4", 00:34:09.228 "uuid": "b9a76e71-e7ba-4668-942e-e75186202cf4" 00:34:09.228 } 00:34:09.228 ] 00:34:09.228 } 00:34:09.228 ] 00:34:09.228 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.228 11:42:34 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4072516 00:34:09.228 11:42:34 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:34:09.228 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.228 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:09.228 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.228 11:42:34 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:34:09.228 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.228 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:09.485 rmmod nvme_tcp 00:34:09.485 rmmod nvme_fabrics 00:34:09.485 rmmod nvme_keyring 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4072311 ']' 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4072311 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 4072311 ']' 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 4072311 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4072311 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4072311' 00:34:09.485 killing process with pid 4072311 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 4072311 00:34:09.485 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 4072311 00:34:09.744 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:09.744 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:09.745 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:09.745 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:09.745 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:09.745 11:42:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.745 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:09.745 11:42:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.697 11:42:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:11.697 00:34:11.697 real 0m12.729s 00:34:11.697 user 0m8.876s 00:34:11.697 sys 0m7.167s 00:34:11.697 11:42:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:11.697 11:42:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:11.697 ************************************ 00:34:11.697 END TEST nvmf_aer 00:34:11.697 ************************************ 00:34:11.992 11:42:36 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:34:11.992 11:42:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:11.992 11:42:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:11.992 11:42:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:11.992 ************************************ 00:34:11.992 START TEST nvmf_async_init 00:34:11.992 ************************************ 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:34:11.992 * Looking for test storage... 
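The nvmf_aer run above builds its target through the same JSON-RPC surface and then hot-adds a namespace to provoke the asynchronous event. A minimal sketch of the equivalent calls with the stock scripts/rpc.py client, assuming the default /var/tmp/spdk.sock socket (sizes, NQN, serial number and listener address are copied from the log; the test itself goes through its rpc_cmd wrapper):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # hot-adding a second namespace is what raises the "Changed Namespace" notice the aer tool logs above
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

With the second namespace attached, nvmf_get_subsystems reports both Malloc0 and Malloc1 under cnode1, matching the JSON dump above, before the test tears the bdevs and subsystem back down.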
00:34:11.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3c7c58ef732942bb9b1be2ee8a4d6485 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:11.992 11:42:36 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:34:11.992 11:42:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:21.979 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:21.979 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:21.979 Found net devices under 0000:af:00.0: cvl_0_0 00:34:21.979 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:21.980 Found net devices under 0000:af:00.1: cvl_0_1 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:21.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:34:21.980 00:34:21.980 --- 10.0.0.2 ping statistics --- 00:34:21.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.980 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:21.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:34:21.980 00:34:21.980 --- 10.0.0.1 ping statistics --- 00:34:21.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.980 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4077037 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4077037 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 4077037 ']' 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:21.980 11:42:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.980 [2024-06-10 11:42:45.703597] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
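At this point nvmftestinit has finished wiring the two E810 ports together and nvmf_tgt is being launched inside the target namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1). Everything that follows in the async_init trace is driven over that target's RPC socket. A condensed sketch of the RPC sequence, with method names and arguments taken verbatim from the trace below (the rpc() wrapper and the scripts/rpc.py invocation path are assumptions for illustration only):

  # minimal sketch, assuming scripts/rpc.py and the default /var/tmp/spdk.sock
  rpc() { ./scripts/rpc.py "$@"; }
  rpc nvmf_create_transport -t tcp -o                       # TCP transport with the suite's options
  rpc bdev_null_create null0 1024 512                       # 1024 MiB null bdev, 512 B blocks (2097152 blocks)
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3c7c58ef732942bb9b1be2ee8a4d6485
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

The attached controller surfaces as bdev nvme0n1; the bdev_get_bdevs dumps below report the NGUID back as the bdev UUID/alias, and cntlid ticks up from 1 to 2 after bdev_nvme_reset_controller and to 3 once the suite re-attaches through a second listener on port 4421 secured with a TLS PSK (nvmf_subsystem_add_listener ... --secure-channel plus nvmf_subsystem_add_host/bdev_nvme_attach_controller --psk), which is also why the experimental-TLS and PSK-path deprecation notices appear near the end of the test.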
00:34:21.980 [2024-06-10 11:42:45.703664] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.980 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.980 [2024-06-10 11:42:45.831201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.980 [2024-06-10 11:42:45.914608] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.980 [2024-06-10 11:42:45.914651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:21.980 [2024-06-10 11:42:45.914664] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:21.980 [2024-06-10 11:42:45.914676] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:21.980 [2024-06-10 11:42:45.914685] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.980 [2024-06-10 11:42:45.914719] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.980 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:21.980 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:34:21.980 11:42:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:21.980 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:21.980 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 [2024-06-10 11:42:46.659128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 null0 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3c7c58ef732942bb9b1be2ee8a4d6485 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 [2024-06-10 11:42:46.703380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 nvme0n1 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 [ 00:34:21.981 { 00:34:21.981 "name": "nvme0n1", 00:34:21.981 "aliases": [ 00:34:21.981 "3c7c58ef-7329-42bb-9b1b-e2ee8a4d6485" 00:34:21.981 ], 00:34:21.981 "product_name": "NVMe disk", 00:34:21.981 "block_size": 512, 00:34:21.981 "num_blocks": 2097152, 00:34:21.981 "uuid": "3c7c58ef-7329-42bb-9b1b-e2ee8a4d6485", 00:34:21.981 "assigned_rate_limits": { 00:34:21.981 "rw_ios_per_sec": 0, 00:34:21.981 "rw_mbytes_per_sec": 0, 00:34:21.981 "r_mbytes_per_sec": 0, 00:34:21.981 "w_mbytes_per_sec": 0 00:34:21.981 }, 00:34:21.981 "claimed": false, 00:34:21.981 "zoned": false, 00:34:21.981 "supported_io_types": { 00:34:21.981 "read": true, 00:34:21.981 "write": true, 00:34:21.981 "unmap": false, 00:34:21.981 "write_zeroes": true, 00:34:21.981 "flush": true, 00:34:21.981 "reset": true, 00:34:21.981 "compare": true, 00:34:21.981 "compare_and_write": true, 00:34:21.981 "abort": true, 00:34:21.981 "nvme_admin": true, 00:34:21.981 "nvme_io": true 00:34:21.981 }, 00:34:21.981 "memory_domains": [ 00:34:21.981 { 00:34:21.981 "dma_device_id": "system", 00:34:21.981 "dma_device_type": 1 00:34:21.981 } 00:34:21.981 ], 00:34:21.981 "driver_specific": { 00:34:21.981 "nvme": [ 00:34:21.981 { 00:34:21.981 "trid": { 00:34:21.981 "trtype": "TCP", 00:34:21.981 "adrfam": "IPv4", 00:34:21.981 "traddr": "10.0.0.2", 00:34:21.981 "trsvcid": "4420", 00:34:21.981 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:21.981 }, 00:34:21.981 "ctrlr_data": { 00:34:21.981 "cntlid": 1, 00:34:21.981 "vendor_id": "0x8086", 00:34:21.981 "model_number": "SPDK bdev Controller", 00:34:21.981 "serial_number": "00000000000000000000", 00:34:21.981 "firmware_revision": 
"24.09", 00:34:21.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:21.981 "oacs": { 00:34:21.981 "security": 0, 00:34:21.981 "format": 0, 00:34:21.981 "firmware": 0, 00:34:21.981 "ns_manage": 0 00:34:21.981 }, 00:34:21.981 "multi_ctrlr": true, 00:34:21.981 "ana_reporting": false 00:34:21.981 }, 00:34:21.981 "vs": { 00:34:21.981 "nvme_version": "1.3" 00:34:21.981 }, 00:34:21.981 "ns_data": { 00:34:21.981 "id": 1, 00:34:21.981 "can_share": true 00:34:21.981 } 00:34:21.981 } 00:34:21.981 ], 00:34:21.981 "mp_policy": "active_passive" 00:34:21.981 } 00:34:21.981 } 00:34:21.981 ] 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.981 11:42:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:21.981 [2024-06-10 11:42:46.976472] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:21.981 [2024-06-10 11:42:46.976542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f4a20 (9): Bad file descriptor 00:34:22.241 [2024-06-10 11:42:47.108690] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:22.241 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.241 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:22.241 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.241 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:22.241 [ 00:34:22.241 { 00:34:22.241 "name": "nvme0n1", 00:34:22.241 "aliases": [ 00:34:22.241 "3c7c58ef-7329-42bb-9b1b-e2ee8a4d6485" 00:34:22.241 ], 00:34:22.241 "product_name": "NVMe disk", 00:34:22.241 "block_size": 512, 00:34:22.241 "num_blocks": 2097152, 00:34:22.241 "uuid": "3c7c58ef-7329-42bb-9b1b-e2ee8a4d6485", 00:34:22.241 "assigned_rate_limits": { 00:34:22.241 "rw_ios_per_sec": 0, 00:34:22.241 "rw_mbytes_per_sec": 0, 00:34:22.241 "r_mbytes_per_sec": 0, 00:34:22.241 "w_mbytes_per_sec": 0 00:34:22.241 }, 00:34:22.241 "claimed": false, 00:34:22.241 "zoned": false, 00:34:22.241 "supported_io_types": { 00:34:22.241 "read": true, 00:34:22.241 "write": true, 00:34:22.241 "unmap": false, 00:34:22.241 "write_zeroes": true, 00:34:22.241 "flush": true, 00:34:22.241 "reset": true, 00:34:22.241 "compare": true, 00:34:22.241 "compare_and_write": true, 00:34:22.241 "abort": true, 00:34:22.241 "nvme_admin": true, 00:34:22.241 "nvme_io": true 00:34:22.241 }, 00:34:22.241 "memory_domains": [ 00:34:22.241 { 00:34:22.241 "dma_device_id": "system", 00:34:22.241 "dma_device_type": 1 00:34:22.241 } 00:34:22.241 ], 00:34:22.241 "driver_specific": { 00:34:22.241 "nvme": [ 00:34:22.241 { 00:34:22.241 "trid": { 00:34:22.241 "trtype": "TCP", 00:34:22.241 "adrfam": "IPv4", 00:34:22.241 "traddr": "10.0.0.2", 00:34:22.241 "trsvcid": "4420", 00:34:22.241 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:22.241 }, 00:34:22.241 "ctrlr_data": { 00:34:22.242 "cntlid": 2, 00:34:22.242 "vendor_id": "0x8086", 00:34:22.242 "model_number": "SPDK bdev Controller", 00:34:22.242 "serial_number": "00000000000000000000", 00:34:22.242 "firmware_revision": "24.09", 00:34:22.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:22.242 
"oacs": { 00:34:22.242 "security": 0, 00:34:22.242 "format": 0, 00:34:22.242 "firmware": 0, 00:34:22.242 "ns_manage": 0 00:34:22.242 }, 00:34:22.242 "multi_ctrlr": true, 00:34:22.242 "ana_reporting": false 00:34:22.242 }, 00:34:22.242 "vs": { 00:34:22.242 "nvme_version": "1.3" 00:34:22.242 }, 00:34:22.242 "ns_data": { 00:34:22.242 "id": 1, 00:34:22.242 "can_share": true 00:34:22.242 } 00:34:22.242 } 00:34:22.242 ], 00:34:22.242 "mp_policy": "active_passive" 00:34:22.242 } 00:34:22.242 } 00:34:22.242 ] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rhB4Ba1DLs 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rhB4Ba1DLs 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:22.242 [2024-06-10 11:42:47.181137] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:22.242 [2024-06-10 11:42:47.181279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rhB4Ba1DLs 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:22.242 [2024-06-10 11:42:47.189157] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rhB4Ba1DLs 00:34:22.242 11:42:47 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:22.242 [2024-06-10 11:42:47.201191] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:22.242 [2024-06-10 11:42:47.201238] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:34:22.242 nvme0n1 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:22.242 [ 00:34:22.242 { 00:34:22.242 "name": "nvme0n1", 00:34:22.242 "aliases": [ 00:34:22.242 "3c7c58ef-7329-42bb-9b1b-e2ee8a4d6485" 00:34:22.242 ], 00:34:22.242 "product_name": "NVMe disk", 00:34:22.242 "block_size": 512, 00:34:22.242 "num_blocks": 2097152, 00:34:22.242 "uuid": "3c7c58ef-7329-42bb-9b1b-e2ee8a4d6485", 00:34:22.242 "assigned_rate_limits": { 00:34:22.242 "rw_ios_per_sec": 0, 00:34:22.242 "rw_mbytes_per_sec": 0, 00:34:22.242 "r_mbytes_per_sec": 0, 00:34:22.242 "w_mbytes_per_sec": 0 00:34:22.242 }, 00:34:22.242 "claimed": false, 00:34:22.242 "zoned": false, 00:34:22.242 "supported_io_types": { 00:34:22.242 "read": true, 00:34:22.242 "write": true, 00:34:22.242 "unmap": false, 00:34:22.242 "write_zeroes": true, 00:34:22.242 "flush": true, 00:34:22.242 "reset": true, 00:34:22.242 "compare": true, 00:34:22.242 "compare_and_write": true, 00:34:22.242 "abort": true, 00:34:22.242 "nvme_admin": true, 00:34:22.242 "nvme_io": true 00:34:22.242 }, 00:34:22.242 "memory_domains": [ 00:34:22.242 { 00:34:22.242 "dma_device_id": "system", 00:34:22.242 "dma_device_type": 1 00:34:22.242 } 00:34:22.242 ], 00:34:22.242 "driver_specific": { 00:34:22.242 "nvme": [ 00:34:22.242 { 00:34:22.242 "trid": { 00:34:22.242 "trtype": "TCP", 00:34:22.242 "adrfam": "IPv4", 00:34:22.242 "traddr": "10.0.0.2", 00:34:22.242 "trsvcid": "4421", 00:34:22.242 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:22.242 }, 00:34:22.242 "ctrlr_data": { 00:34:22.242 "cntlid": 3, 00:34:22.242 "vendor_id": "0x8086", 00:34:22.242 "model_number": "SPDK bdev Controller", 00:34:22.242 "serial_number": "00000000000000000000", 00:34:22.242 "firmware_revision": "24.09", 00:34:22.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:22.242 "oacs": { 00:34:22.242 "security": 0, 00:34:22.242 "format": 0, 00:34:22.242 "firmware": 0, 00:34:22.242 "ns_manage": 0 00:34:22.242 }, 00:34:22.242 "multi_ctrlr": true, 00:34:22.242 "ana_reporting": false 00:34:22.242 }, 00:34:22.242 "vs": { 00:34:22.242 "nvme_version": "1.3" 00:34:22.242 }, 00:34:22.242 "ns_data": { 00:34:22.242 "id": 1, 00:34:22.242 "can_share": true 00:34:22.242 } 00:34:22.242 } 00:34:22.242 ], 00:34:22.242 "mp_policy": "active_passive" 00:34:22.242 } 00:34:22.242 } 00:34:22.242 ] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # 
set +x 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.rhB4Ba1DLs 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:22.242 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:22.242 rmmod nvme_tcp 00:34:22.502 rmmod nvme_fabrics 00:34:22.502 rmmod nvme_keyring 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4077037 ']' 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4077037 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 4077037 ']' 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 4077037 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4077037 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4077037' 00:34:22.502 killing process with pid 4077037 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 4077037 00:34:22.502 [2024-06-10 11:42:47.453644] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:34:22.502 [2024-06-10 11:42:47.453675] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:34:22.502 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 4077037 00:34:22.762 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:22.762 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:22.762 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:22.762 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:22.762 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:22.762 11:42:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.762 
11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:22.762 11:42:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.667 11:42:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:24.667 00:34:24.667 real 0m12.883s 00:34:24.667 user 0m4.522s 00:34:24.667 sys 0m7.153s 00:34:24.667 11:42:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:24.667 11:42:49 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:24.667 ************************************ 00:34:24.667 END TEST nvmf_async_init 00:34:24.667 ************************************ 00:34:24.667 11:42:49 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:34:24.667 11:42:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:24.667 11:42:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:24.667 11:42:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:24.926 ************************************ 00:34:24.926 START TEST dma 00:34:24.926 ************************************ 00:34:24.927 11:42:49 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:34:24.927 * Looking for test storage... 00:34:24.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:24.927 11:42:49 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.927 11:42:49 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.927 11:42:49 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.927 11:42:49 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.927 11:42:49 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.927 11:42:49 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.927 11:42:49 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.927 11:42:49 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:34:24.927 11:42:49 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:24.927 11:42:49 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:24.927 11:42:49 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:34:24.927 11:42:49 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:34:24.927 00:34:24.927 real 0m0.141s 00:34:24.927 user 0m0.062s 00:34:24.927 sys 0m0.089s 00:34:24.927 
11:42:49 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:24.927 11:42:49 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:34:24.927 ************************************ 00:34:24.927 END TEST dma 00:34:24.927 ************************************ 00:34:24.927 11:42:49 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:34:24.927 11:42:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:24.927 11:42:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:24.927 11:42:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:25.187 ************************************ 00:34:25.187 START TEST nvmf_identify 00:34:25.187 ************************************ 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:34:25.187 * Looking for test storage... 00:34:25.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.187 11:42:50 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
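The dma suite above exits immediately for this configuration (host/dma.sh bails out unless the transport is rdma), so the identify suite starts next and calls nvmftestinit again. For NET_TYPE=phy over TCP that helper repeats the work already traced for async_init: discover the two ice-driven E810 ports, turn one into the target inside a private network namespace, address both ends, open the NVMe/TCP port and load the kernel initiator. A condensed sketch of the traced commands (interface and namespace names are the ones printed in the trace; the real logic lives in test/nvmf/common.sh):

  # minimal sketch of the phy/tcp topology these suites run on
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first E810 port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
  modprobe nvme-tcp                                  # kernel NVMe/TCP initiator for later connect tests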
00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:34:25.187 11:42:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.176 11:42:58 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:35.176 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:35.176 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:35.176 Found net devices under 0000:af:00.0: cvl_0_0 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:35.176 Found net devices under 0000:af:00.1: cvl_0_1 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:35.176 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:34:35.176 00:34:35.176 --- 10.0.0.2 ping statistics --- 00:34:35.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.176 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:35.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:34:35.176 00:34:35.176 --- 10.0.0.1 ping statistics --- 00:34:35.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.176 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.176 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4081802 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4081802 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 4081802 ']' 00:34:35.177 11:42:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 [2024-06-10 11:42:59.052331] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
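The trace above builds a self-contained NVMe/TCP loopback on the two E810 ports discovered earlier: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables rule opening TCP port 4420 and a ping in each direction to verify the path. A minimal sketch of the same layout, using the interface names and addresses from this run (substitute your own NIC ports):

# Target-side namespace holds one port; the initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator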
00:34:35.177 [2024-06-10 11:42:59.052392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.177 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.177 [2024-06-10 11:42:59.186130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.177 [2024-06-10 11:42:59.275329] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.177 [2024-06-10 11:42:59.275376] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.177 [2024-06-10 11:42:59.275389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.177 [2024-06-10 11:42:59.275401] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.177 [2024-06-10 11:42:59.275411] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.177 [2024-06-10 11:42:59.275467] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.177 [2024-06-10 11:42:59.275495] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.177 [2024-06-10 11:42:59.275629] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.177 [2024-06-10 11:42:59.275629] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 [2024-06-10 11:42:59.970570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:35.177 11:42:59 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 Malloc0 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 [2024-06-10 11:43:00.076461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.177 [ 00:34:35.177 { 00:34:35.177 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:35.177 "subtype": "Discovery", 00:34:35.177 "listen_addresses": [ 00:34:35.177 { 00:34:35.177 "trtype": "TCP", 00:34:35.177 "adrfam": "IPv4", 00:34:35.177 "traddr": "10.0.0.2", 00:34:35.177 "trsvcid": "4420" 00:34:35.177 } 00:34:35.177 ], 00:34:35.177 "allow_any_host": true, 00:34:35.177 "hosts": [] 00:34:35.177 }, 00:34:35.177 { 00:34:35.177 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:35.177 "subtype": "NVMe", 00:34:35.177 "listen_addresses": [ 00:34:35.177 { 00:34:35.177 "trtype": "TCP", 00:34:35.177 "adrfam": "IPv4", 00:34:35.177 "traddr": "10.0.0.2", 00:34:35.177 "trsvcid": "4420" 00:34:35.177 } 00:34:35.177 ], 00:34:35.177 "allow_any_host": true, 00:34:35.177 "hosts": [], 00:34:35.177 "serial_number": "SPDK00000000000001", 00:34:35.177 "model_number": "SPDK bdev Controller", 00:34:35.177 "max_namespaces": 32, 00:34:35.177 "min_cntlid": 1, 00:34:35.177 "max_cntlid": 65519, 00:34:35.177 "namespaces": [ 00:34:35.177 { 00:34:35.177 "nsid": 1, 00:34:35.177 "bdev_name": "Malloc0", 00:34:35.177 "name": "Malloc0", 00:34:35.177 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:34:35.177 "eui64": "ABCDEF0123456789", 00:34:35.177 "uuid": "bdcd73e6-c1ea-4ee0-b37c-159fa885a8bc" 00:34:35.177 } 00:34:35.177 ] 00:34:35.177 } 00:34:35.177 ] 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.177 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:34:35.177 [2024-06-10 11:43:00.136096] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
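The rpc_cmd calls above drive the running nvmf_tgt over its /var/tmp/spdk.sock RPC socket: create the TCP transport, back a subsystem with a malloc bdev, attach the namespace, and add listeners for both the subsystem and discovery. A condensed sketch of the same configuration via SPDK's scripts/rpc.py, with method names and arguments taken from this trace (the rpc.py path is the usual repository layout, assumed here):

# Reproduce the target configuration shown above against a running nvmf_tgt.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems   # should list the discovery subsystem and cnode1, as in the JSON above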
00:34:35.177 [2024-06-10 11:43:00.136147] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082081 ] 00:34:35.177 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.177 [2024-06-10 11:43:00.173073] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:34:35.177 [2024-06-10 11:43:00.173125] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:34:35.177 [2024-06-10 11:43:00.173133] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:34:35.177 [2024-06-10 11:43:00.173149] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:34:35.177 [2024-06-10 11:43:00.173161] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:34:35.177 [2024-06-10 11:43:00.176624] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:34:35.177 [2024-06-10 11:43:00.176662] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21a8f00 0 00:34:35.177 [2024-06-10 11:43:00.183587] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:34:35.177 [2024-06-10 11:43:00.183605] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:34:35.177 [2024-06-10 11:43:00.183612] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:34:35.177 [2024-06-10 11:43:00.183619] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:34:35.177 [2024-06-10 11:43:00.183668] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.177 [2024-06-10 11:43:00.183680] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.177 [2024-06-10 11:43:00.183687] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.177 [2024-06-10 11:43:00.183704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:35.177 [2024-06-10 11:43:00.183725] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.177 [2024-06-10 11:43:00.190586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.177 [2024-06-10 11:43:00.190599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.177 [2024-06-10 11:43:00.190605] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.177 [2024-06-10 11:43:00.190612] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2213df0) on tqpair=0x21a8f00 00:34:35.177 [2024-06-10 11:43:00.190628] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:34:35.177 [2024-06-10 11:43:00.190637] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:34:35.178 [2024-06-10 11:43:00.190645] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:34:35.178 [2024-06-10 11:43:00.190661] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.190668] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.190675] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.178 [2024-06-10 11:43:00.190685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.178 [2024-06-10 11:43:00.190704] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.178 [2024-06-10 11:43:00.190912] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.178 [2024-06-10 11:43:00.190922] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.178 [2024-06-10 11:43:00.190928] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.190935] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2213df0) on tqpair=0x21a8f00 00:34:35.178 [2024-06-10 11:43:00.190945] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:34:35.178 [2024-06-10 11:43:00.190957] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:34:35.178 [2024-06-10 11:43:00.190967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.190974] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.190980] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.178 [2024-06-10 11:43:00.190990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.178 [2024-06-10 11:43:00.191007] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.178 [2024-06-10 11:43:00.191162] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.178 [2024-06-10 11:43:00.191172] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.178 [2024-06-10 11:43:00.191178] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.191185] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2213df0) on tqpair=0x21a8f00 00:34:35.178 [2024-06-10 11:43:00.191194] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:34:35.178 [2024-06-10 11:43:00.191207] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:34:35.178 [2024-06-10 11:43:00.191218] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.191227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.191233] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.178 [2024-06-10 11:43:00.191243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.178 [2024-06-10 11:43:00.191259] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.178 [2024-06-10 11:43:00.191414] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.178 [2024-06-10 
11:43:00.191423] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.178 [2024-06-10 11:43:00.191429] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.191436] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2213df0) on tqpair=0x21a8f00 00:34:35.178 [2024-06-10 11:43:00.191445] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:34:35.178 [2024-06-10 11:43:00.191459] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.191466] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.191473] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.178 [2024-06-10 11:43:00.191482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.178 [2024-06-10 11:43:00.191497] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.178 [2024-06-10 11:43:00.191688] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.178 [2024-06-10 11:43:00.191697] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.178 [2024-06-10 11:43:00.191703] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.191710] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2213df0) on tqpair=0x21a8f00 00:34:35.178 [2024-06-10 11:43:00.191719] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:34:35.178 [2024-06-10 11:43:00.191727] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:34:35.178 [2024-06-10 11:43:00.191740] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:34:35.178 [2024-06-10 11:43:00.191849] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:34:35.178 [2024-06-10 11:43:00.191858] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:34:35.178 [2024-06-10 11:43:00.191869] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.191876] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.191882] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.178 [2024-06-10 11:43:00.191892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.178 [2024-06-10 11:43:00.191908] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.178 [2024-06-10 11:43:00.192018] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.178 [2024-06-10 11:43:00.192028] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.178 [2024-06-10 11:43:00.192034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192041] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2213df0) on tqpair=0x21a8f00 00:34:35.178 [2024-06-10 11:43:00.192050] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:34:35.178 [2024-06-10 11:43:00.192066] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192073] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192079] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.178 [2024-06-10 11:43:00.192089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.178 [2024-06-10 11:43:00.192104] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.178 [2024-06-10 11:43:00.192210] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.178 [2024-06-10 11:43:00.192220] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.178 [2024-06-10 11:43:00.192226] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192233] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2213df0) on tqpair=0x21a8f00 00:34:35.178 [2024-06-10 11:43:00.192242] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:34:35.178 [2024-06-10 11:43:00.192250] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:34:35.178 [2024-06-10 11:43:00.192263] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:34:35.178 [2024-06-10 11:43:00.192282] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:34:35.178 [2024-06-10 11:43:00.192294] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192301] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.178 [2024-06-10 11:43:00.192310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.178 [2024-06-10 11:43:00.192326] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.178 [2024-06-10 11:43:00.192471] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.178 [2024-06-10 11:43:00.192481] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.178 [2024-06-10 11:43:00.192487] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192493] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21a8f00): datao=0, datal=4096, cccid=0 00:34:35.178 [2024-06-10 11:43:00.192502] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2213df0) on tqpair(0x21a8f00): expected_datao=0, payload_size=4096 00:34:35.178 [2024-06-10 11:43:00.192510] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192520] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192527] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192585] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.178 [2024-06-10 11:43:00.192595] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.178 [2024-06-10 11:43:00.192601] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192608] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2213df0) on tqpair=0x21a8f00 00:34:35.178 [2024-06-10 11:43:00.192620] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:34:35.178 [2024-06-10 11:43:00.192628] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:34:35.178 [2024-06-10 11:43:00.192636] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:34:35.178 [2024-06-10 11:43:00.192645] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:34:35.178 [2024-06-10 11:43:00.192656] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:34:35.178 [2024-06-10 11:43:00.192665] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:34:35.178 [2024-06-10 11:43:00.192681] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:34:35.178 [2024-06-10 11:43:00.192695] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192702] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.178 [2024-06-10 11:43:00.192708] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.192718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:35.179 [2024-06-10 11:43:00.192736] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.179 [2024-06-10 11:43:00.192944] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.179 [2024-06-10 11:43:00.192953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.179 [2024-06-10 11:43:00.192959] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.192966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2213df0) on tqpair=0x21a8f00 00:34:35.179 [2024-06-10 11:43:00.192977] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.192984] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.192990] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.192999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:34:35.179 [2024-06-10 11:43:00.193009] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193015] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193022] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.193030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.179 [2024-06-10 11:43:00.193040] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193053] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.193061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.179 [2024-06-10 11:43:00.193071] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193077] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193084] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.193092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.179 [2024-06-10 11:43:00.193100] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:34:35.179 [2024-06-10 11:43:00.193116] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:34:35.179 [2024-06-10 11:43:00.193127] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193133] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.193143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.179 [2024-06-10 11:43:00.193162] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213df0, cid 0, qid 0 00:34:35.179 [2024-06-10 11:43:00.193170] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2213f50, cid 1, qid 0 00:34:35.179 [2024-06-10 11:43:00.193178] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22140b0, cid 2, qid 0 00:34:35.179 [2024-06-10 11:43:00.193185] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.179 [2024-06-10 11:43:00.193193] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214370, cid 4, qid 0 00:34:35.179 [2024-06-10 11:43:00.193348] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.179 [2024-06-10 11:43:00.193358] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.179 [2024-06-10 11:43:00.193364] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193371] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214370) on tqpair=0x21a8f00 
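In the initialization trace above, the SET FEATURES command with cdw10 0x0000000b programs feature 0x0B (Asynchronous Event Configuration), the four ASYNC EVENT REQUEST commands on cid 0-3 arm the controller's outstanding AERs, and the GET FEATURES with cdw10 0x0000000f reads feature 0x0F (Keep Alive Timer). Purely as an illustration of the same feature identifiers from a stock Linux initiator (this test drives the admin queue through SPDK, not nvme-cli):

# Illustration only; assumes nvme-cli and a kernel-attached controller at /dev/nvme0.
nvme get-feature /dev/nvme0 -f 0x0b   # Asynchronous Event Configuration
nvme get-feature /dev/nvme0 -f 0x0f   # Keep Alive Timer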
00:34:35.179 [2024-06-10 11:43:00.193380] nvme_ctrlr.c:2957:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:34:35.179 [2024-06-10 11:43:00.193389] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:34:35.179 [2024-06-10 11:43:00.193405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193411] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.193421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.179 [2024-06-10 11:43:00.193436] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214370, cid 4, qid 0 00:34:35.179 [2024-06-10 11:43:00.193553] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.179 [2024-06-10 11:43:00.193563] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.179 [2024-06-10 11:43:00.193569] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193581] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21a8f00): datao=0, datal=4096, cccid=4 00:34:35.179 [2024-06-10 11:43:00.193589] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2214370) on tqpair(0x21a8f00): expected_datao=0, payload_size=4096 00:34:35.179 [2024-06-10 11:43:00.193597] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193666] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193673] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193775] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.179 [2024-06-10 11:43:00.193785] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.179 [2024-06-10 11:43:00.193791] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193797] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214370) on tqpair=0x21a8f00 00:34:35.179 [2024-06-10 11:43:00.193816] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:34:35.179 [2024-06-10 11:43:00.193848] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193856] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.193865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.179 [2024-06-10 11:43:00.193875] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193882] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.193888] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.193900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.179 [2024-06-10 11:43:00.193921] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214370, cid 4, qid 0 00:34:35.179 [2024-06-10 11:43:00.193929] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22144d0, cid 5, qid 0 00:34:35.179 [2024-06-10 11:43:00.194068] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.179 [2024-06-10 11:43:00.194078] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.179 [2024-06-10 11:43:00.194084] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.194091] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21a8f00): datao=0, datal=1024, cccid=4 00:34:35.179 [2024-06-10 11:43:00.194099] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2214370) on tqpair(0x21a8f00): expected_datao=0, payload_size=1024 00:34:35.179 [2024-06-10 11:43:00.194107] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.194116] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.194122] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.194131] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.179 [2024-06-10 11:43:00.194140] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.179 [2024-06-10 11:43:00.194146] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.194152] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22144d0) on tqpair=0x21a8f00 00:34:35.179 [2024-06-10 11:43:00.238588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.179 [2024-06-10 11:43:00.238602] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.179 [2024-06-10 11:43:00.238608] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.238615] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214370) on tqpair=0x21a8f00 00:34:35.179 [2024-06-10 11:43:00.238638] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.238645] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.238655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.179 [2024-06-10 11:43:00.238680] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214370, cid 4, qid 0 00:34:35.179 [2024-06-10 11:43:00.238946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.179 [2024-06-10 11:43:00.238955] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.179 [2024-06-10 11:43:00.238962] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.238968] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21a8f00): datao=0, datal=3072, cccid=4 00:34:35.179 [2024-06-10 11:43:00.238976] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2214370) on tqpair(0x21a8f00): expected_datao=0, payload_size=3072 00:34:35.179 [2024-06-10 11:43:00.238984] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.238994] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
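The GET LOG PAGE commands in this exchange fetch the Discovery log page: cdw10 carries the log page identifier in bits 7:0 (0x70 = Discovery) and the low 16 bits of the zero-based dword count in bits 31:16, which is why the c2h transfers are 1024 and 3072 bytes above, followed just below by an 8-byte read of the start of the page (the generation counter). A quick decode of the cdw10 values seen here:

# Decode the cdw10 values from the trace (LID in bits 7:0, NUMDL in bits 31:16).
for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
    printf 'cdw10=%s  LID=0x%02x  transfer=%d bytes\n' \
        "$cdw10" $(( cdw10 & 0xff )) $(( (((cdw10 >> 16) & 0xffff) + 1) * 4 ))
done
# -> 1024, 3072 and 8 bytes, matching the datal values in the c2h_data entries.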
00:34:35.179 [2024-06-10 11:43:00.239001] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.239108] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.179 [2024-06-10 11:43:00.239118] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.179 [2024-06-10 11:43:00.239124] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.239131] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214370) on tqpair=0x21a8f00 00:34:35.179 [2024-06-10 11:43:00.239144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.179 [2024-06-10 11:43:00.239151] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21a8f00) 00:34:35.179 [2024-06-10 11:43:00.239163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.179 [2024-06-10 11:43:00.239185] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214370, cid 4, qid 0 00:34:35.179 [2024-06-10 11:43:00.239321] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.179 [2024-06-10 11:43:00.239330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.180 [2024-06-10 11:43:00.239337] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.180 [2024-06-10 11:43:00.239343] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21a8f00): datao=0, datal=8, cccid=4 00:34:35.180 [2024-06-10 11:43:00.239351] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2214370) on tqpair(0x21a8f00): expected_datao=0, payload_size=8 00:34:35.180 [2024-06-10 11:43:00.239359] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.180 [2024-06-10 11:43:00.239369] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.180 [2024-06-10 11:43:00.239375] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.442 [2024-06-10 11:43:00.279911] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.442 [2024-06-10 11:43:00.279927] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.442 [2024-06-10 11:43:00.279934] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.442 [2024-06-10 11:43:00.279941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214370) on tqpair=0x21a8f00 00:34:35.442 ===================================================== 00:34:35.442 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:35.442 ===================================================== 00:34:35.442 Controller Capabilities/Features 00:34:35.442 ================================ 00:34:35.442 Vendor ID: 0000 00:34:35.442 Subsystem Vendor ID: 0000 00:34:35.442 Serial Number: .................... 00:34:35.442 Model Number: ........................................ 
00:34:35.442 Firmware Version: 24.09 00:34:35.442 Recommended Arb Burst: 0 00:34:35.442 IEEE OUI Identifier: 00 00 00 00:34:35.442 Multi-path I/O 00:34:35.442 May have multiple subsystem ports: No 00:34:35.442 May have multiple controllers: No 00:34:35.442 Associated with SR-IOV VF: No 00:34:35.442 Max Data Transfer Size: 131072 00:34:35.442 Max Number of Namespaces: 0 00:34:35.442 Max Number of I/O Queues: 1024 00:34:35.442 NVMe Specification Version (VS): 1.3 00:34:35.442 NVMe Specification Version (Identify): 1.3 00:34:35.442 Maximum Queue Entries: 128 00:34:35.442 Contiguous Queues Required: Yes 00:34:35.442 Arbitration Mechanisms Supported 00:34:35.442 Weighted Round Robin: Not Supported 00:34:35.442 Vendor Specific: Not Supported 00:34:35.442 Reset Timeout: 15000 ms 00:34:35.442 Doorbell Stride: 4 bytes 00:34:35.442 NVM Subsystem Reset: Not Supported 00:34:35.442 Command Sets Supported 00:34:35.442 NVM Command Set: Supported 00:34:35.442 Boot Partition: Not Supported 00:34:35.442 Memory Page Size Minimum: 4096 bytes 00:34:35.442 Memory Page Size Maximum: 4096 bytes 00:34:35.443 Persistent Memory Region: Not Supported 00:34:35.443 Optional Asynchronous Events Supported 00:34:35.443 Namespace Attribute Notices: Not Supported 00:34:35.443 Firmware Activation Notices: Not Supported 00:34:35.443 ANA Change Notices: Not Supported 00:34:35.443 PLE Aggregate Log Change Notices: Not Supported 00:34:35.443 LBA Status Info Alert Notices: Not Supported 00:34:35.443 EGE Aggregate Log Change Notices: Not Supported 00:34:35.443 Normal NVM Subsystem Shutdown event: Not Supported 00:34:35.443 Zone Descriptor Change Notices: Not Supported 00:34:35.443 Discovery Log Change Notices: Supported 00:34:35.443 Controller Attributes 00:34:35.443 128-bit Host Identifier: Not Supported 00:34:35.443 Non-Operational Permissive Mode: Not Supported 00:34:35.443 NVM Sets: Not Supported 00:34:35.443 Read Recovery Levels: Not Supported 00:34:35.443 Endurance Groups: Not Supported 00:34:35.443 Predictable Latency Mode: Not Supported 00:34:35.443 Traffic Based Keep ALive: Not Supported 00:34:35.443 Namespace Granularity: Not Supported 00:34:35.443 SQ Associations: Not Supported 00:34:35.443 UUID List: Not Supported 00:34:35.443 Multi-Domain Subsystem: Not Supported 00:34:35.443 Fixed Capacity Management: Not Supported 00:34:35.443 Variable Capacity Management: Not Supported 00:34:35.443 Delete Endurance Group: Not Supported 00:34:35.443 Delete NVM Set: Not Supported 00:34:35.443 Extended LBA Formats Supported: Not Supported 00:34:35.443 Flexible Data Placement Supported: Not Supported 00:34:35.443 00:34:35.443 Controller Memory Buffer Support 00:34:35.443 ================================ 00:34:35.443 Supported: No 00:34:35.443 00:34:35.443 Persistent Memory Region Support 00:34:35.443 ================================ 00:34:35.443 Supported: No 00:34:35.443 00:34:35.443 Admin Command Set Attributes 00:34:35.443 ============================ 00:34:35.443 Security Send/Receive: Not Supported 00:34:35.443 Format NVM: Not Supported 00:34:35.443 Firmware Activate/Download: Not Supported 00:34:35.443 Namespace Management: Not Supported 00:34:35.443 Device Self-Test: Not Supported 00:34:35.443 Directives: Not Supported 00:34:35.443 NVMe-MI: Not Supported 00:34:35.443 Virtualization Management: Not Supported 00:34:35.443 Doorbell Buffer Config: Not Supported 00:34:35.443 Get LBA Status Capability: Not Supported 00:34:35.443 Command & Feature Lockdown Capability: Not Supported 00:34:35.443 Abort Command Limit: 1 00:34:35.443 Async 
Event Request Limit: 4 00:34:35.443 Number of Firmware Slots: N/A 00:34:35.443 Firmware Slot 1 Read-Only: N/A 00:34:35.443 Firmware Activation Without Reset: N/A 00:34:35.443 Multiple Update Detection Support: N/A 00:34:35.443 Firmware Update Granularity: No Information Provided 00:34:35.443 Per-Namespace SMART Log: No 00:34:35.443 Asymmetric Namespace Access Log Page: Not Supported 00:34:35.443 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:35.443 Command Effects Log Page: Not Supported 00:34:35.443 Get Log Page Extended Data: Supported 00:34:35.443 Telemetry Log Pages: Not Supported 00:34:35.443 Persistent Event Log Pages: Not Supported 00:34:35.443 Supported Log Pages Log Page: May Support 00:34:35.443 Commands Supported & Effects Log Page: Not Supported 00:34:35.443 Feature Identifiers & Effects Log Page:May Support 00:34:35.443 NVMe-MI Commands & Effects Log Page: May Support 00:34:35.443 Data Area 4 for Telemetry Log: Not Supported 00:34:35.443 Error Log Page Entries Supported: 128 00:34:35.443 Keep Alive: Not Supported 00:34:35.443 00:34:35.443 NVM Command Set Attributes 00:34:35.443 ========================== 00:34:35.443 Submission Queue Entry Size 00:34:35.443 Max: 1 00:34:35.443 Min: 1 00:34:35.443 Completion Queue Entry Size 00:34:35.443 Max: 1 00:34:35.443 Min: 1 00:34:35.443 Number of Namespaces: 0 00:34:35.443 Compare Command: Not Supported 00:34:35.443 Write Uncorrectable Command: Not Supported 00:34:35.443 Dataset Management Command: Not Supported 00:34:35.443 Write Zeroes Command: Not Supported 00:34:35.443 Set Features Save Field: Not Supported 00:34:35.443 Reservations: Not Supported 00:34:35.443 Timestamp: Not Supported 00:34:35.443 Copy: Not Supported 00:34:35.443 Volatile Write Cache: Not Present 00:34:35.443 Atomic Write Unit (Normal): 1 00:34:35.443 Atomic Write Unit (PFail): 1 00:34:35.443 Atomic Compare & Write Unit: 1 00:34:35.443 Fused Compare & Write: Supported 00:34:35.443 Scatter-Gather List 00:34:35.443 SGL Command Set: Supported 00:34:35.443 SGL Keyed: Supported 00:34:35.443 SGL Bit Bucket Descriptor: Not Supported 00:34:35.443 SGL Metadata Pointer: Not Supported 00:34:35.443 Oversized SGL: Not Supported 00:34:35.443 SGL Metadata Address: Not Supported 00:34:35.443 SGL Offset: Supported 00:34:35.443 Transport SGL Data Block: Not Supported 00:34:35.443 Replay Protected Memory Block: Not Supported 00:34:35.443 00:34:35.443 Firmware Slot Information 00:34:35.443 ========================= 00:34:35.443 Active slot: 0 00:34:35.443 00:34:35.443 00:34:35.443 Error Log 00:34:35.443 ========= 00:34:35.443 00:34:35.443 Active Namespaces 00:34:35.443 ================= 00:34:35.443 Discovery Log Page 00:34:35.443 ================== 00:34:35.443 Generation Counter: 2 00:34:35.443 Number of Records: 2 00:34:35.443 Record Format: 0 00:34:35.443 00:34:35.443 Discovery Log Entry 0 00:34:35.443 ---------------------- 00:34:35.443 Transport Type: 3 (TCP) 00:34:35.443 Address Family: 1 (IPv4) 00:34:35.443 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:35.443 Entry Flags: 00:34:35.443 Duplicate Returned Information: 1 00:34:35.443 Explicit Persistent Connection Support for Discovery: 1 00:34:35.443 Transport Requirements: 00:34:35.443 Secure Channel: Not Required 00:34:35.443 Port ID: 0 (0x0000) 00:34:35.443 Controller ID: 65535 (0xffff) 00:34:35.443 Admin Max SQ Size: 128 00:34:35.443 Transport Service Identifier: 4420 00:34:35.443 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:35.443 Transport Address: 10.0.0.2 00:34:35.443 
Discovery Log Entry 1 00:34:35.443 ---------------------- 00:34:35.443 Transport Type: 3 (TCP) 00:34:35.443 Address Family: 1 (IPv4) 00:34:35.443 Subsystem Type: 2 (NVM Subsystem) 00:34:35.443 Entry Flags: 00:34:35.443 Duplicate Returned Information: 0 00:34:35.443 Explicit Persistent Connection Support for Discovery: 0 00:34:35.443 Transport Requirements: 00:34:35.443 Secure Channel: Not Required 00:34:35.443 Port ID: 0 (0x0000) 00:34:35.443 Controller ID: 65535 (0xffff) 00:34:35.443 Admin Max SQ Size: 128 00:34:35.443 Transport Service Identifier: 4420 00:34:35.443 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:34:35.443 Transport Address: 10.0.0.2 [2024-06-10 11:43:00.280055] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:34:35.443 [2024-06-10 11:43:00.280074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.443 [2024-06-10 11:43:00.280085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.443 [2024-06-10 11:43:00.280094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.443 [2024-06-10 11:43:00.280104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.443 [2024-06-10 11:43:00.280116] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.443 [2024-06-10 11:43:00.280123] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.443 [2024-06-10 11:43:00.280129] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.443 [2024-06-10 11:43:00.280140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.443 [2024-06-10 11:43:00.280161] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.443 [2024-06-10 11:43:00.280265] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.443 [2024-06-10 11:43:00.280275] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.443 [2024-06-10 11:43:00.280281] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.443 [2024-06-10 11:43:00.280288] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.443 [2024-06-10 11:43:00.280299] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.443 [2024-06-10 11:43:00.280306] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.443 [2024-06-10 11:43:00.280312] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.443 [2024-06-10 11:43:00.280322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.443 [2024-06-10 11:43:00.280342] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.443 [2024-06-10 11:43:00.280476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.443 [2024-06-10 11:43:00.280488] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.443 [2024-06-10 11:43:00.280494] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.443 [2024-06-10 11:43:00.280501] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.280510] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:34:35.444 [2024-06-10 11:43:00.280518] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:34:35.444 [2024-06-10 11:43:00.280533] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.280539] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.280546] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.280555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.280571] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.280681] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 11:43:00.280691] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.280697] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.280703] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.280720] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.280727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.280733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.280743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.280758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.280992] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 11:43:00.281001] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.281007] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281014] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.281030] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281037] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281043] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.281052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.281068] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.281316] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 
11:43:00.281325] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.281332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281338] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.281352] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281359] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281365] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.281375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.281395] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.281604] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 11:43:00.281614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.281620] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281627] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.281642] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281649] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281655] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.281665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.281680] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.281801] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 11:43:00.281811] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.281817] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281823] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.281839] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281846] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.281852] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.281861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.281876] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.282001] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 11:43:00.282010] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.282017] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
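The discovery log above reports two entries at 10.0.0.2:4420 (the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1), and the test goes on to run spdk_nvme_identify directly against the NVM subsystem. For reference, an illustrative sketch of reaching the same endpoints from a standard Linux initiator with nvme-cli, which this test does not use:

# Not part of this run; assumes nvme-cli and the kernel nvme-tcp initiator module.
modprobe nvme-tcp
nvme discover -t tcp -a 10.0.0.2 -s 4420                                # should list both entries above
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # attach the NVM subsystem
nvme list                                                               # the Malloc0 namespace should appear
nvme disconnect -n nqn.2016-06.io.spdk:cnode1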
00:34:35.444 [2024-06-10 11:43:00.282023] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.282039] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.282045] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.282052] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.282061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.282076] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.282186] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 11:43:00.282195] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.282201] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.282208] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.282223] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.282230] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.282236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.282246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.282261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.282462] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 11:43:00.282471] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.282477] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.282484] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.282499] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.282506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.282512] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.282522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.282538] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.286590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 11:43:00.286604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.286611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.286617] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.286633] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.286640] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.286647] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21a8f00) 00:34:35.444 [2024-06-10 11:43:00.286656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.444 [2024-06-10 11:43:00.286674] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2214210, cid 3, qid 0 00:34:35.444 [2024-06-10 11:43:00.286882] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.444 [2024-06-10 11:43:00.286892] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.444 [2024-06-10 11:43:00.286898] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.444 [2024-06-10 11:43:00.286905] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2214210) on tqpair=0x21a8f00 00:34:35.444 [2024-06-10 11:43:00.286918] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:34:35.444 00:34:35.444 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:34:35.444 [2024-06-10 11:43:00.332292] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:34:35.444 [2024-06-10 11:43:00.332331] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4082087 ] 00:34:35.444 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.444 [2024-06-10 11:43:00.366796] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:34:35.444 [2024-06-10 11:43:00.366847] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:34:35.444 [2024-06-10 11:43:00.366855] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:34:35.444 [2024-06-10 11:43:00.366870] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:34:35.445 [2024-06-10 11:43:00.366882] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:34:35.445 [2024-06-10 11:43:00.370619] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:34:35.445 [2024-06-10 11:43:00.370656] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a21f00 0 00:34:35.445 [2024-06-10 11:43:00.378589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:34:35.445 [2024-06-10 11:43:00.378603] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:34:35.445 [2024-06-10 11:43:00.378610] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:34:35.445 [2024-06-10 11:43:00.378616] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:34:35.445 [2024-06-10 11:43:00.378660] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.378668] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.378675] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.445 [2024-06-10 11:43:00.378689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:34:35.445 [2024-06-10 11:43:00.378710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.445 [2024-06-10 11:43:00.386588] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.445 [2024-06-10 11:43:00.386600] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.445 [2024-06-10 11:43:00.386607] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.386614] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8cdf0) on tqpair=0x1a21f00 00:34:35.445 [2024-06-10 11:43:00.386631] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:34:35.445 [2024-06-10 11:43:00.386641] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:34:35.445 [2024-06-10 11:43:00.386650] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:34:35.445 [2024-06-10 11:43:00.386665] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.386672] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.386678] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.445 [2024-06-10 11:43:00.386689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.445 [2024-06-10 11:43:00.386708] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.445 [2024-06-10 11:43:00.386899] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.445 [2024-06-10 11:43:00.386909] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.445 [2024-06-10 11:43:00.386915] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.386922] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8cdf0) on tqpair=0x1a21f00 00:34:35.445 [2024-06-10 11:43:00.386932] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:34:35.445 [2024-06-10 11:43:00.386944] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:34:35.445 [2024-06-10 11:43:00.386955] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.386962] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.386968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.445 [2024-06-10 11:43:00.386979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.445 [2024-06-10 11:43:00.386995] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.445 [2024-06-10 11:43:00.387095] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.445 [2024-06-10 11:43:00.387108] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.445 [2024-06-10 11:43:00.387115] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387121] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8cdf0) on tqpair=0x1a21f00 00:34:35.445 [2024-06-10 11:43:00.387131] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:34:35.445 [2024-06-10 11:43:00.387144] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:34:35.445 [2024-06-10 11:43:00.387155] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387161] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.445 [2024-06-10 11:43:00.387177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.445 [2024-06-10 11:43:00.387194] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.445 [2024-06-10 11:43:00.387289] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.445 [2024-06-10 11:43:00.387299] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.445 [2024-06-10 11:43:00.387305] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387312] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8cdf0) on tqpair=0x1a21f00 00:34:35.445 [2024-06-10 11:43:00.387321] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:34:35.445 [2024-06-10 11:43:00.387335] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387342] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387349] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.445 [2024-06-10 11:43:00.387358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.445 [2024-06-10 11:43:00.387374] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.445 [2024-06-10 11:43:00.387465] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.445 [2024-06-10 11:43:00.387475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.445 [2024-06-10 11:43:00.387481] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387488] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8cdf0) on tqpair=0x1a21f00 00:34:35.445 [2024-06-10 11:43:00.387496] nvme_ctrlr.c:3804:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:34:35.445 [2024-06-10 11:43:00.387505] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:34:35.445 
[2024-06-10 11:43:00.387517] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:34:35.445 [2024-06-10 11:43:00.387627] nvme_ctrlr.c:3997:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:34:35.445 [2024-06-10 11:43:00.387634] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:34:35.445 [2024-06-10 11:43:00.387645] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387652] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387658] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.445 [2024-06-10 11:43:00.387668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.445 [2024-06-10 11:43:00.387687] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.445 [2024-06-10 11:43:00.387783] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.445 [2024-06-10 11:43:00.387793] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.445 [2024-06-10 11:43:00.387799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387806] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8cdf0) on tqpair=0x1a21f00 00:34:35.445 [2024-06-10 11:43:00.387815] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:34:35.445 [2024-06-10 11:43:00.387829] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387836] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387843] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.445 [2024-06-10 11:43:00.387852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.445 [2024-06-10 11:43:00.387868] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.445 [2024-06-10 11:43:00.387963] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.445 [2024-06-10 11:43:00.387973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.445 [2024-06-10 11:43:00.387979] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.387986] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8cdf0) on tqpair=0x1a21f00 00:34:35.445 [2024-06-10 11:43:00.387995] nvme_ctrlr.c:3839:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:34:35.445 [2024-06-10 11:43:00.388003] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:34:35.445 [2024-06-10 11:43:00.388016] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:34:35.445 [2024-06-10 11:43:00.388029] 
nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:34:35.445 [2024-06-10 11:43:00.388042] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.388049] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.445 [2024-06-10 11:43:00.388059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.445 [2024-06-10 11:43:00.388074] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.445 [2024-06-10 11:43:00.388215] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.445 [2024-06-10 11:43:00.388225] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.445 [2024-06-10 11:43:00.388232] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.388238] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a21f00): datao=0, datal=4096, cccid=0 00:34:35.445 [2024-06-10 11:43:00.388246] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8cdf0) on tqpair(0x1a21f00): expected_datao=0, payload_size=4096 00:34:35.445 [2024-06-10 11:43:00.388255] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.388326] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.388333] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.445 [2024-06-10 11:43:00.432586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.445 [2024-06-10 11:43:00.432606] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.446 [2024-06-10 11:43:00.432613] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.432623] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8cdf0) on tqpair=0x1a21f00 00:34:35.446 [2024-06-10 11:43:00.432637] nvme_ctrlr.c:2039:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:34:35.446 [2024-06-10 11:43:00.432646] nvme_ctrlr.c:2043:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:34:35.446 [2024-06-10 11:43:00.432654] nvme_ctrlr.c:2046:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:34:35.446 [2024-06-10 11:43:00.432661] nvme_ctrlr.c:2070:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:34:35.446 [2024-06-10 11:43:00.432670] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:34:35.446 [2024-06-10 11:43:00.432678] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.432696] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.432710] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.432717] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.432724] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.446 [2024-06-10 11:43:00.432735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:35.446 [2024-06-10 11:43:00.432755] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.446 [2024-06-10 11:43:00.432938] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.446 [2024-06-10 11:43:00.432948] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.446 [2024-06-10 11:43:00.432955] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.432961] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8cdf0) on tqpair=0x1a21f00 00:34:35.446 [2024-06-10 11:43:00.432973] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.432980] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.432986] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a21f00) 00:34:35.446 [2024-06-10 11:43:00.432995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.446 [2024-06-10 11:43:00.433005] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433018] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a21f00) 00:34:35.446 [2024-06-10 11:43:00.433027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.446 [2024-06-10 11:43:00.433037] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433043] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433050] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a21f00) 00:34:35.446 [2024-06-10 11:43:00.433058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.446 [2024-06-10 11:43:00.433068] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433074] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433081] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.446 [2024-06-10 11:43:00.433089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.446 [2024-06-10 11:43:00.433101] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.433117] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.433128] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433135] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1a21f00) 00:34:35.446 [2024-06-10 11:43:00.433145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.446 [2024-06-10 11:43:00.433163] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cdf0, cid 0, qid 0 00:34:35.446 [2024-06-10 11:43:00.433171] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8cf50, cid 1, qid 0 00:34:35.446 [2024-06-10 11:43:00.433179] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d0b0, cid 2, qid 0 00:34:35.446 [2024-06-10 11:43:00.433187] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.446 [2024-06-10 11:43:00.433194] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d370, cid 4, qid 0 00:34:35.446 [2024-06-10 11:43:00.433409] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.446 [2024-06-10 11:43:00.433419] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.446 [2024-06-10 11:43:00.433425] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433432] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d370) on tqpair=0x1a21f00 00:34:35.446 [2024-06-10 11:43:00.433441] nvme_ctrlr.c:2957:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:34:35.446 [2024-06-10 11:43:00.433450] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.433463] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.433476] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.433487] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433494] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433500] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a21f00) 00:34:35.446 [2024-06-10 11:43:00.433510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:35.446 [2024-06-10 11:43:00.433525] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d370, cid 4, qid 0 00:34:35.446 [2024-06-10 11:43:00.433628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.446 [2024-06-10 11:43:00.433638] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.446 [2024-06-10 11:43:00.433645] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433651] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d370) on tqpair=0x1a21f00 00:34:35.446 [2024-06-10 11:43:00.433713] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.433729] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 
30000 ms) 00:34:35.446 [2024-06-10 11:43:00.433741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433747] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a21f00) 00:34:35.446 [2024-06-10 11:43:00.433757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.446 [2024-06-10 11:43:00.433779] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d370, cid 4, qid 0 00:34:35.446 [2024-06-10 11:43:00.433962] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.446 [2024-06-10 11:43:00.433972] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.446 [2024-06-10 11:43:00.433978] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.433985] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a21f00): datao=0, datal=4096, cccid=4 00:34:35.446 [2024-06-10 11:43:00.433993] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8d370) on tqpair(0x1a21f00): expected_datao=0, payload_size=4096 00:34:35.446 [2024-06-10 11:43:00.434001] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.434011] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.434018] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.434088] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.446 [2024-06-10 11:43:00.434097] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.446 [2024-06-10 11:43:00.434104] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.434110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d370) on tqpair=0x1a21f00 00:34:35.446 [2024-06-10 11:43:00.434128] nvme_ctrlr.c:4612:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:34:35.446 [2024-06-10 11:43:00.434141] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.434155] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:34:35.446 [2024-06-10 11:43:00.434166] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.434172] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a21f00) 00:34:35.446 [2024-06-10 11:43:00.434182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.446 [2024-06-10 11:43:00.434198] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d370, cid 4, qid 0 00:34:35.446 [2024-06-10 11:43:00.434319] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.446 [2024-06-10 11:43:00.434329] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.446 [2024-06-10 11:43:00.434335] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.434342] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a21f00): 
datao=0, datal=4096, cccid=4 00:34:35.446 [2024-06-10 11:43:00.434350] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8d370) on tqpair(0x1a21f00): expected_datao=0, payload_size=4096 00:34:35.446 [2024-06-10 11:43:00.434358] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.434368] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.434374] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.446 [2024-06-10 11:43:00.434442] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.446 [2024-06-10 11:43:00.434451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.447 [2024-06-10 11:43:00.434457] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.434464] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d370) on tqpair=0x1a21f00 00:34:35.447 [2024-06-10 11:43:00.434478] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:34:35.447 [2024-06-10 11:43:00.434492] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:34:35.447 [2024-06-10 11:43:00.434506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.434513] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a21f00) 00:34:35.447 [2024-06-10 11:43:00.434523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.447 [2024-06-10 11:43:00.434539] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d370, cid 4, qid 0 00:34:35.447 [2024-06-10 11:43:00.434702] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.447 [2024-06-10 11:43:00.434712] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.447 [2024-06-10 11:43:00.434718] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.434725] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a21f00): datao=0, datal=4096, cccid=4 00:34:35.447 [2024-06-10 11:43:00.434733] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8d370) on tqpair(0x1a21f00): expected_datao=0, payload_size=4096 00:34:35.447 [2024-06-10 11:43:00.434741] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.434791] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.434797] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.479586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.447 [2024-06-10 11:43:00.479601] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.447 [2024-06-10 11:43:00.479607] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.479614] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d370) on tqpair=0x1a21f00 00:34:35.447 [2024-06-10 11:43:00.479628] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific 
(timeout 30000 ms) 00:34:35.447 [2024-06-10 11:43:00.479641] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:34:35.447 [2024-06-10 11:43:00.479655] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:34:35.447 [2024-06-10 11:43:00.479665] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:34:35.447 [2024-06-10 11:43:00.479673] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:34:35.447 [2024-06-10 11:43:00.479682] nvme_ctrlr.c:3045:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:34:35.447 [2024-06-10 11:43:00.479690] nvme_ctrlr.c:1539:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:34:35.447 [2024-06-10 11:43:00.479699] nvme_ctrlr.c:1545:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:34:35.447 [2024-06-10 11:43:00.479722] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.479729] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a21f00) 00:34:35.447 [2024-06-10 11:43:00.479740] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.447 [2024-06-10 11:43:00.479750] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.479757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.479763] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a21f00) 00:34:35.447 [2024-06-10 11:43:00.479773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:34:35.447 [2024-06-10 11:43:00.479793] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d370, cid 4, qid 0 00:34:35.447 [2024-06-10 11:43:00.479804] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d4d0, cid 5, qid 0 00:34:35.447 [2024-06-10 11:43:00.479921] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.447 [2024-06-10 11:43:00.479931] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.447 [2024-06-10 11:43:00.479938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.479944] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d370) on tqpair=0x1a21f00 00:34:35.447 [2024-06-10 11:43:00.479956] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.447 [2024-06-10 11:43:00.479965] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.447 [2024-06-10 11:43:00.479971] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.479977] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d4d0) on tqpair=0x1a21f00 00:34:35.447 [2024-06-10 11:43:00.479994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.480001] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a21f00) 00:34:35.447 [2024-06-10 11:43:00.480010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.447 [2024-06-10 11:43:00.480026] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d4d0, cid 5, qid 0 00:34:35.447 [2024-06-10 11:43:00.480197] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.447 [2024-06-10 11:43:00.480206] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.447 [2024-06-10 11:43:00.480213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.480219] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d4d0) on tqpair=0x1a21f00 00:34:35.447 [2024-06-10 11:43:00.480235] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.480242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a21f00) 00:34:35.447 [2024-06-10 11:43:00.480252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.447 [2024-06-10 11:43:00.480267] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d4d0, cid 5, qid 0 00:34:35.447 [2024-06-10 11:43:00.480423] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.447 [2024-06-10 11:43:00.480432] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.447 [2024-06-10 11:43:00.480439] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.480445] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d4d0) on tqpair=0x1a21f00 00:34:35.447 [2024-06-10 11:43:00.480461] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.480468] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a21f00) 00:34:35.447 [2024-06-10 11:43:00.480478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.447 [2024-06-10 11:43:00.480493] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d4d0, cid 5, qid 0 00:34:35.447 [2024-06-10 11:43:00.480597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.447 [2024-06-10 11:43:00.480608] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.447 [2024-06-10 11:43:00.480614] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.480621] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d4d0) on tqpair=0x1a21f00 00:34:35.447 [2024-06-10 11:43:00.480640] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.480647] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a21f00) 00:34:35.447 [2024-06-10 11:43:00.480656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.447 [2024-06-10 11:43:00.480670] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 
11:43:00.480677] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a21f00) 00:34:35.447 [2024-06-10 11:43:00.480686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.447 [2024-06-10 11:43:00.480697] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.480704] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a21f00) 00:34:35.447 [2024-06-10 11:43:00.480713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.447 [2024-06-10 11:43:00.480727] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.447 [2024-06-10 11:43:00.480734] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a21f00) 00:34:35.448 [2024-06-10 11:43:00.480744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.448 [2024-06-10 11:43:00.480761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d4d0, cid 5, qid 0 00:34:35.448 [2024-06-10 11:43:00.480769] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d370, cid 4, qid 0 00:34:35.448 [2024-06-10 11:43:00.480776] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d630, cid 6, qid 0 00:34:35.448 [2024-06-10 11:43:00.480784] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d790, cid 7, qid 0 00:34:35.448 [2024-06-10 11:43:00.480995] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.448 [2024-06-10 11:43:00.481004] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.448 [2024-06-10 11:43:00.481011] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481017] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a21f00): datao=0, datal=8192, cccid=5 00:34:35.448 [2024-06-10 11:43:00.481026] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8d4d0) on tqpair(0x1a21f00): expected_datao=0, payload_size=8192 00:34:35.448 [2024-06-10 11:43:00.481034] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481179] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481186] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481195] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.448 [2024-06-10 11:43:00.481204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.448 [2024-06-10 11:43:00.481210] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481216] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a21f00): datao=0, datal=512, cccid=4 00:34:35.448 [2024-06-10 11:43:00.481225] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8d370) on tqpair(0x1a21f00): expected_datao=0, payload_size=512 00:34:35.448 [2024-06-10 11:43:00.481232] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481242] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481248] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481257] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.448 [2024-06-10 11:43:00.481266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.448 [2024-06-10 11:43:00.481272] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481278] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a21f00): datao=0, datal=512, cccid=6 00:34:35.448 [2024-06-10 11:43:00.481286] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8d630) on tqpair(0x1a21f00): expected_datao=0, payload_size=512 00:34:35.448 [2024-06-10 11:43:00.481297] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481306] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481312] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481321] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:34:35.448 [2024-06-10 11:43:00.481330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:34:35.448 [2024-06-10 11:43:00.481336] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481343] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a21f00): datao=0, datal=4096, cccid=7 00:34:35.448 [2024-06-10 11:43:00.481351] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8d790) on tqpair(0x1a21f00): expected_datao=0, payload_size=4096 00:34:35.448 [2024-06-10 11:43:00.481359] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481368] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481375] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481387] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.448 [2024-06-10 11:43:00.481395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.448 [2024-06-10 11:43:00.481402] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481408] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d4d0) on tqpair=0x1a21f00 00:34:35.448 [2024-06-10 11:43:00.481427] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.448 [2024-06-10 11:43:00.481436] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.448 [2024-06-10 11:43:00.481443] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481449] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d370) on tqpair=0x1a21f00 00:34:35.448 [2024-06-10 11:43:00.481464] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.448 [2024-06-10 11:43:00.481473] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.448 [2024-06-10 11:43:00.481479] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481486] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d630) on tqpair=0x1a21f00 00:34:35.448 [2024-06-10 11:43:00.481500] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.448 [2024-06-10 11:43:00.481509] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.448 [2024-06-10 11:43:00.481515] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.448 [2024-06-10 11:43:00.481522] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d790) on tqpair=0x1a21f00 00:34:35.448 ===================================================== 00:34:35.448 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:35.448 ===================================================== 00:34:35.448 Controller Capabilities/Features 00:34:35.448 ================================ 00:34:35.448 Vendor ID: 8086 00:34:35.448 Subsystem Vendor ID: 8086 00:34:35.448 Serial Number: SPDK00000000000001 00:34:35.448 Model Number: SPDK bdev Controller 00:34:35.448 Firmware Version: 24.09 00:34:35.448 Recommended Arb Burst: 6 00:34:35.448 IEEE OUI Identifier: e4 d2 5c 00:34:35.448 Multi-path I/O 00:34:35.448 May have multiple subsystem ports: Yes 00:34:35.448 May have multiple controllers: Yes 00:34:35.448 Associated with SR-IOV VF: No 00:34:35.448 Max Data Transfer Size: 131072 00:34:35.448 Max Number of Namespaces: 32 00:34:35.448 Max Number of I/O Queues: 127 00:34:35.448 NVMe Specification Version (VS): 1.3 00:34:35.448 NVMe Specification Version (Identify): 1.3 00:34:35.448 Maximum Queue Entries: 128 00:34:35.448 Contiguous Queues Required: Yes 00:34:35.448 Arbitration Mechanisms Supported 00:34:35.448 Weighted Round Robin: Not Supported 00:34:35.448 Vendor Specific: Not Supported 00:34:35.448 Reset Timeout: 15000 ms 00:34:35.448 Doorbell Stride: 4 bytes 00:34:35.448 NVM Subsystem Reset: Not Supported 00:34:35.448 Command Sets Supported 00:34:35.448 NVM Command Set: Supported 00:34:35.448 Boot Partition: Not Supported 00:34:35.448 Memory Page Size Minimum: 4096 bytes 00:34:35.448 Memory Page Size Maximum: 4096 bytes 00:34:35.448 Persistent Memory Region: Not Supported 00:34:35.448 Optional Asynchronous Events Supported 00:34:35.448 Namespace Attribute Notices: Supported 00:34:35.448 Firmware Activation Notices: Not Supported 00:34:35.448 ANA Change Notices: Not Supported 00:34:35.448 PLE Aggregate Log Change Notices: Not Supported 00:34:35.448 LBA Status Info Alert Notices: Not Supported 00:34:35.448 EGE Aggregate Log Change Notices: Not Supported 00:34:35.448 Normal NVM Subsystem Shutdown event: Not Supported 00:34:35.448 Zone Descriptor Change Notices: Not Supported 00:34:35.448 Discovery Log Change Notices: Not Supported 00:34:35.448 Controller Attributes 00:34:35.448 128-bit Host Identifier: Supported 00:34:35.448 Non-Operational Permissive Mode: Not Supported 00:34:35.448 NVM Sets: Not Supported 00:34:35.448 Read Recovery Levels: Not Supported 00:34:35.448 Endurance Groups: Not Supported 00:34:35.448 Predictable Latency Mode: Not Supported 00:34:35.448 Traffic Based Keep ALive: Not Supported 00:34:35.448 Namespace Granularity: Not Supported 00:34:35.448 SQ Associations: Not Supported 00:34:35.448 UUID List: Not Supported 00:34:35.448 Multi-Domain Subsystem: Not Supported 00:34:35.448 Fixed Capacity Management: Not Supported 00:34:35.448 Variable Capacity Management: Not Supported 00:34:35.448 Delete Endurance Group: Not Supported 00:34:35.448 Delete NVM Set: Not Supported 00:34:35.448 Extended LBA Formats Supported: Not Supported 00:34:35.448 Flexible Data Placement Supported: Not Supported 00:34:35.448 00:34:35.448 Controller Memory Buffer 
Support 00:34:35.448 ================================ 00:34:35.448 Supported: No 00:34:35.448 00:34:35.448 Persistent Memory Region Support 00:34:35.448 ================================ 00:34:35.448 Supported: No 00:34:35.448 00:34:35.448 Admin Command Set Attributes 00:34:35.448 ============================ 00:34:35.448 Security Send/Receive: Not Supported 00:34:35.448 Format NVM: Not Supported 00:34:35.448 Firmware Activate/Download: Not Supported 00:34:35.448 Namespace Management: Not Supported 00:34:35.448 Device Self-Test: Not Supported 00:34:35.448 Directives: Not Supported 00:34:35.448 NVMe-MI: Not Supported 00:34:35.448 Virtualization Management: Not Supported 00:34:35.448 Doorbell Buffer Config: Not Supported 00:34:35.448 Get LBA Status Capability: Not Supported 00:34:35.448 Command & Feature Lockdown Capability: Not Supported 00:34:35.448 Abort Command Limit: 4 00:34:35.448 Async Event Request Limit: 4 00:34:35.448 Number of Firmware Slots: N/A 00:34:35.448 Firmware Slot 1 Read-Only: N/A 00:34:35.448 Firmware Activation Without Reset: N/A 00:34:35.448 Multiple Update Detection Support: N/A 00:34:35.448 Firmware Update Granularity: No Information Provided 00:34:35.448 Per-Namespace SMART Log: No 00:34:35.448 Asymmetric Namespace Access Log Page: Not Supported 00:34:35.448 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:34:35.448 Command Effects Log Page: Supported 00:34:35.448 Get Log Page Extended Data: Supported 00:34:35.448 Telemetry Log Pages: Not Supported 00:34:35.449 Persistent Event Log Pages: Not Supported 00:34:35.449 Supported Log Pages Log Page: May Support 00:34:35.449 Commands Supported & Effects Log Page: Not Supported 00:34:35.449 Feature Identifiers & Effects Log Page:May Support 00:34:35.449 NVMe-MI Commands & Effects Log Page: May Support 00:34:35.449 Data Area 4 for Telemetry Log: Not Supported 00:34:35.449 Error Log Page Entries Supported: 128 00:34:35.449 Keep Alive: Supported 00:34:35.449 Keep Alive Granularity: 10000 ms 00:34:35.449 00:34:35.449 NVM Command Set Attributes 00:34:35.449 ========================== 00:34:35.449 Submission Queue Entry Size 00:34:35.449 Max: 64 00:34:35.449 Min: 64 00:34:35.449 Completion Queue Entry Size 00:34:35.449 Max: 16 00:34:35.449 Min: 16 00:34:35.449 Number of Namespaces: 32 00:34:35.449 Compare Command: Supported 00:34:35.449 Write Uncorrectable Command: Not Supported 00:34:35.449 Dataset Management Command: Supported 00:34:35.449 Write Zeroes Command: Supported 00:34:35.449 Set Features Save Field: Not Supported 00:34:35.449 Reservations: Supported 00:34:35.449 Timestamp: Not Supported 00:34:35.449 Copy: Supported 00:34:35.449 Volatile Write Cache: Present 00:34:35.449 Atomic Write Unit (Normal): 1 00:34:35.449 Atomic Write Unit (PFail): 1 00:34:35.449 Atomic Compare & Write Unit: 1 00:34:35.449 Fused Compare & Write: Supported 00:34:35.449 Scatter-Gather List 00:34:35.449 SGL Command Set: Supported 00:34:35.449 SGL Keyed: Supported 00:34:35.449 SGL Bit Bucket Descriptor: Not Supported 00:34:35.449 SGL Metadata Pointer: Not Supported 00:34:35.449 Oversized SGL: Not Supported 00:34:35.449 SGL Metadata Address: Not Supported 00:34:35.449 SGL Offset: Supported 00:34:35.449 Transport SGL Data Block: Not Supported 00:34:35.449 Replay Protected Memory Block: Not Supported 00:34:35.449 00:34:35.449 Firmware Slot Information 00:34:35.449 ========================= 00:34:35.449 Active slot: 1 00:34:35.449 Slot 1 Firmware Revision: 24.09 00:34:35.449 00:34:35.449 00:34:35.449 Commands Supported and Effects 00:34:35.449 
============================== 00:34:35.449 Admin Commands 00:34:35.449 -------------- 00:34:35.449 Get Log Page (02h): Supported 00:34:35.449 Identify (06h): Supported 00:34:35.449 Abort (08h): Supported 00:34:35.449 Set Features (09h): Supported 00:34:35.449 Get Features (0Ah): Supported 00:34:35.449 Asynchronous Event Request (0Ch): Supported 00:34:35.449 Keep Alive (18h): Supported 00:34:35.449 I/O Commands 00:34:35.449 ------------ 00:34:35.449 Flush (00h): Supported LBA-Change 00:34:35.449 Write (01h): Supported LBA-Change 00:34:35.449 Read (02h): Supported 00:34:35.449 Compare (05h): Supported 00:34:35.449 Write Zeroes (08h): Supported LBA-Change 00:34:35.449 Dataset Management (09h): Supported LBA-Change 00:34:35.449 Copy (19h): Supported LBA-Change 00:34:35.449 Unknown (79h): Supported LBA-Change 00:34:35.449 Unknown (7Ah): Supported 00:34:35.449 00:34:35.449 Error Log 00:34:35.449 ========= 00:34:35.449 00:34:35.449 Arbitration 00:34:35.449 =========== 00:34:35.449 Arbitration Burst: 1 00:34:35.449 00:34:35.449 Power Management 00:34:35.449 ================ 00:34:35.449 Number of Power States: 1 00:34:35.449 Current Power State: Power State #0 00:34:35.449 Power State #0: 00:34:35.449 Max Power: 0.00 W 00:34:35.449 Non-Operational State: Operational 00:34:35.449 Entry Latency: Not Reported 00:34:35.449 Exit Latency: Not Reported 00:34:35.449 Relative Read Throughput: 0 00:34:35.449 Relative Read Latency: 0 00:34:35.449 Relative Write Throughput: 0 00:34:35.449 Relative Write Latency: 0 00:34:35.449 Idle Power: Not Reported 00:34:35.449 Active Power: Not Reported 00:34:35.449 Non-Operational Permissive Mode: Not Supported 00:34:35.449 00:34:35.449 Health Information 00:34:35.449 ================== 00:34:35.449 Critical Warnings: 00:34:35.449 Available Spare Space: OK 00:34:35.449 Temperature: OK 00:34:35.449 Device Reliability: OK 00:34:35.449 Read Only: No 00:34:35.449 Volatile Memory Backup: OK 00:34:35.449 Current Temperature: 0 Kelvin (-273 Celsius) 00:34:35.449 Temperature Threshold: [2024-06-10 11:43:00.481649] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.481657] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a21f00) 00:34:35.449 [2024-06-10 11:43:00.481667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.449 [2024-06-10 11:43:00.481684] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d790, cid 7, qid 0 00:34:35.449 [2024-06-10 11:43:00.481799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.449 [2024-06-10 11:43:00.481808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.449 [2024-06-10 11:43:00.481815] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.481822] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d790) on tqpair=0x1a21f00 00:34:35.449 [2024-06-10 11:43:00.481862] nvme_ctrlr.c:4276:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:34:35.449 [2024-06-10 11:43:00.481879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.449 [2024-06-10 11:43:00.481890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:35.449 [2024-06-10 11:43:00.481902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.449 [2024-06-10 11:43:00.481912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:35.449 [2024-06-10 11:43:00.481924] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.481931] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.481937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.449 [2024-06-10 11:43:00.481947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.449 [2024-06-10 11:43:00.481964] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.449 [2024-06-10 11:43:00.482064] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.449 [2024-06-10 11:43:00.482074] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.449 [2024-06-10 11:43:00.482080] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482087] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.449 [2024-06-10 11:43:00.482098] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482105] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482111] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.449 [2024-06-10 11:43:00.482121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.449 [2024-06-10 11:43:00.482140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.449 [2024-06-10 11:43:00.482262] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.449 [2024-06-10 11:43:00.482272] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.449 [2024-06-10 11:43:00.482278] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482285] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.449 [2024-06-10 11:43:00.482294] nvme_ctrlr.c:1137:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:34:35.449 [2024-06-10 11:43:00.482302] nvme_ctrlr.c:1140:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:34:35.449 [2024-06-10 11:43:00.482316] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482323] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482330] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.449 [2024-06-10 11:43:00.482339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.449 [2024-06-10 11:43:00.482355] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.449 
[2024-06-10 11:43:00.482450] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.449 [2024-06-10 11:43:00.482459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.449 [2024-06-10 11:43:00.482466] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482472] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.449 [2024-06-10 11:43:00.482488] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482495] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482501] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.449 [2024-06-10 11:43:00.482511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.449 [2024-06-10 11:43:00.482529] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.449 [2024-06-10 11:43:00.482631] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.449 [2024-06-10 11:43:00.482641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.449 [2024-06-10 11:43:00.482647] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482654] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.449 [2024-06-10 11:43:00.482669] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.449 [2024-06-10 11:43:00.482676] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.482683] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.450 [2024-06-10 11:43:00.482692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.450 [2024-06-10 11:43:00.482708] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.450 [2024-06-10 11:43:00.482803] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.450 [2024-06-10 11:43:00.482813] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.450 [2024-06-10 11:43:00.482819] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.482826] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.450 [2024-06-10 11:43:00.482841] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.482848] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.482855] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.450 [2024-06-10 11:43:00.482864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.450 [2024-06-10 11:43:00.482879] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.450 [2024-06-10 11:43:00.482975] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.450 [2024-06-10 11:43:00.482984] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:34:35.450 [2024-06-10 11:43:00.482991] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.482997] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.450 [2024-06-10 11:43:00.483013] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483019] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483026] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.450 [2024-06-10 11:43:00.483035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.450 [2024-06-10 11:43:00.483051] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.450 [2024-06-10 11:43:00.483146] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.450 [2024-06-10 11:43:00.483156] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.450 [2024-06-10 11:43:00.483162] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483168] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.450 [2024-06-10 11:43:00.483184] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483191] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483197] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.450 [2024-06-10 11:43:00.483207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.450 [2024-06-10 11:43:00.483224] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.450 [2024-06-10 11:43:00.483320] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.450 [2024-06-10 11:43:00.483329] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.450 [2024-06-10 11:43:00.483335] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483342] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.450 [2024-06-10 11:43:00.483358] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483371] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.450 [2024-06-10 11:43:00.483381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.450 [2024-06-10 11:43:00.483396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.450 [2024-06-10 11:43:00.483491] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.450 [2024-06-10 11:43:00.483500] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.450 [2024-06-10 11:43:00.483506] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483513] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.450 [2024-06-10 11:43:00.483528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483535] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.483542] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.450 [2024-06-10 11:43:00.483551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.450 [2024-06-10 11:43:00.483566] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.450 [2024-06-10 11:43:00.487586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.450 [2024-06-10 11:43:00.487598] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.450 [2024-06-10 11:43:00.487604] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.487611] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.450 [2024-06-10 11:43:00.487627] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.487634] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.487641] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a21f00) 00:34:35.450 [2024-06-10 11:43:00.487651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:35.450 [2024-06-10 11:43:00.487667] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8d210, cid 3, qid 0 00:34:35.450 [2024-06-10 11:43:00.487845] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:34:35.450 [2024-06-10 11:43:00.487855] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:34:35.450 [2024-06-10 11:43:00.487861] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:34:35.450 [2024-06-10 11:43:00.487868] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a8d210) on tqpair=0x1a21f00 00:34:35.450 [2024-06-10 11:43:00.487881] nvme_ctrlr.c:1259:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:34:35.450 0 Kelvin (-273 Celsius) 00:34:35.450 Available Spare: 0% 00:34:35.450 Available Spare Threshold: 0% 00:34:35.450 Life Percentage Used: 0% 00:34:35.450 Data Units Read: 0 00:34:35.450 Data Units Written: 0 00:34:35.450 Host Read Commands: 0 00:34:35.450 Host Write Commands: 0 00:34:35.450 Controller Busy Time: 0 minutes 00:34:35.450 Power Cycles: 0 00:34:35.450 Power On Hours: 0 hours 00:34:35.450 Unsafe Shutdowns: 0 00:34:35.450 Unrecoverable Media Errors: 0 00:34:35.450 Lifetime Error Log Entries: 0 00:34:35.450 Warning Temperature Time: 0 minutes 00:34:35.450 Critical Temperature Time: 0 minutes 00:34:35.450 00:34:35.450 Number of Queues 00:34:35.450 ================ 00:34:35.450 Number of I/O Submission Queues: 127 00:34:35.450 Number of I/O Completion Queues: 127 00:34:35.450 00:34:35.450 Active Namespaces 00:34:35.450 ================= 00:34:35.450 Namespace ID:1 00:34:35.450 Error Recovery Timeout: Unlimited 00:34:35.450 Command Set Identifier: NVM (00h) 00:34:35.450 Deallocate: Supported 00:34:35.450 Deallocated/Unwritten 
Error: Not Supported 00:34:35.450 Deallocated Read Value: Unknown 00:34:35.450 Deallocate in Write Zeroes: Not Supported 00:34:35.450 Deallocated Guard Field: 0xFFFF 00:34:35.450 Flush: Supported 00:34:35.450 Reservation: Supported 00:34:35.450 Namespace Sharing Capabilities: Multiple Controllers 00:34:35.450 Size (in LBAs): 131072 (0GiB) 00:34:35.450 Capacity (in LBAs): 131072 (0GiB) 00:34:35.450 Utilization (in LBAs): 131072 (0GiB) 00:34:35.450 NGUID: ABCDEF0123456789ABCDEF0123456789 00:34:35.450 EUI64: ABCDEF0123456789 00:34:35.450 UUID: bdcd73e6-c1ea-4ee0-b37c-159fa885a8bc 00:34:35.450 Thin Provisioning: Not Supported 00:34:35.450 Per-NS Atomic Units: Yes 00:34:35.450 Atomic Boundary Size (Normal): 0 00:34:35.450 Atomic Boundary Size (PFail): 0 00:34:35.450 Atomic Boundary Offset: 0 00:34:35.450 Maximum Single Source Range Length: 65535 00:34:35.450 Maximum Copy Length: 65535 00:34:35.450 Maximum Source Range Count: 1 00:34:35.450 NGUID/EUI64 Never Reused: No 00:34:35.450 Namespace Write Protected: No 00:34:35.450 Number of LBA Formats: 1 00:34:35.450 Current LBA Format: LBA Format #00 00:34:35.450 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:35.450 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:35.450 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:35.450 rmmod nvme_tcp 00:34:35.710 rmmod nvme_fabrics 00:34:35.710 rmmod nvme_keyring 00:34:35.710 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:35.710 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:34:35.710 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4081802 ']' 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4081802 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 4081802 ']' 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 4081802 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4081802 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:35.711 11:43:00 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4081802' 00:34:35.711 killing process with pid 4081802 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 4081802 00:34:35.711 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 4081802 00:34:35.992 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:35.992 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:35.992 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:35.992 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:35.992 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:35.992 11:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.992 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:35.992 11:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.896 11:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:37.896 00:34:37.896 real 0m12.922s 00:34:37.896 user 0m8.877s 00:34:37.896 sys 0m7.254s 00:34:37.896 11:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:37.896 11:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:37.896 ************************************ 00:34:37.896 END TEST nvmf_identify 00:34:37.896 ************************************ 00:34:38.155 11:43:03 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:34:38.155 11:43:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:38.155 11:43:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:38.155 11:43:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:38.155 ************************************ 00:34:38.155 START TEST nvmf_perf 00:34:38.155 ************************************ 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:34:38.155 * Looking for test storage... 
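The nvmf_identify teardown logged above reduces to a few host-side commands. A minimal sketch of the same sequence run by hand, assuming the workspace path shown in this log and a hypothetical $nvmfpid variable holding the target PID (the log kills PID 4081802 at this point):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Remove the test subsystem from the running target, then stop the target app.
    "$spdk"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"                  # $nvmfpid is assumed; killprocess above resolves it to 4081802

    # Unload the host-side modules, mirroring the rmmod output above.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Flush the initiator-side test address used by the TCP fixture.
    ip -4 addr flush cvl_0_1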
00:34:38.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.155 11:43:03 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:38.155 11:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:38.156 11:43:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:46.275 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:46.276 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:46.276 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:46.276 Found net devices under 0000:af:00.0: cvl_0_0 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:46.276 Found net devices under 0000:af:00.1: cvl_0_1 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:46.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:46.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:34:46.276 00:34:46.276 --- 10.0.0.2 ping statistics --- 00:34:46.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.276 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:34:46.276 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:46.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:46.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:34:46.534 00:34:46.534 --- 10.0.0.1 ping statistics --- 00:34:46.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.534 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4086515 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4086515 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 4086515 ']' 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:46.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:46.534 11:43:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:46.534 [2024-06-10 11:43:11.487371] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:34:46.534 [2024-06-10 11:43:11.487431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:46.534 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.534 [2024-06-10 11:43:11.606913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:46.792 [2024-06-10 11:43:11.693830] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:46.792 [2024-06-10 11:43:11.693874] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:46.792 [2024-06-10 11:43:11.693888] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:46.792 [2024-06-10 11:43:11.693900] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:46.792 [2024-06-10 11:43:11.693910] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:46.792 [2024-06-10 11:43:11.697598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.792 [2024-06-10 11:43:11.697619] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:46.792 [2024-06-10 11:43:11.697730] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:46.792 [2024-06-10 11:43:11.697732] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.356 11:43:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:47.356 11:43:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:34:47.356 11:43:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:47.356 11:43:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:47.356 11:43:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:47.356 11:43:12 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:47.356 11:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:47.356 11:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:50.638 11:43:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:34:50.638 11:43:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:34:50.896 11:43:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:34:50.896 11:43:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:51.154 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:34:51.154 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:34:51.154 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:34:51.154 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:34:51.154 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:51.412 [2024-06-10 11:43:16.265942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.412 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:51.669 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:34:51.669 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:51.927 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:34:51.927 11:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:51.927 11:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:52.184 [2024-06-10 11:43:17.229681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.184 11:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:52.442 11:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:34:52.442 11:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:34:52.442 11:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:34:52.442 11:43:17 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:34:53.832 Initializing NVMe Controllers 00:34:53.832 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:34:53.832 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:34:53.832 Initialization complete. Launching workers. 00:34:53.832 ======================================================== 00:34:53.832 Latency(us) 00:34:53.832 Device Information : IOPS MiB/s Average min max 00:34:53.832 PCIE (0000:d8:00.0) NSID 1 from core 0: 77293.42 301.93 413.44 55.87 7276.61 00:34:53.832 ======================================================== 00:34:53.832 Total : 77293.42 301.93 413.44 55.87 7276.61 00:34:53.832 00:34:53.832 11:43:18 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:53.832 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.207 Initializing NVMe Controllers 00:34:55.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:55.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:55.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:55.207 Initialization complete. Launching workers. 
00:34:55.207 ======================================================== 00:34:55.207 Latency(us) 00:34:55.207 Device Information : IOPS MiB/s Average min max 00:34:55.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.80 0.32 12376.85 228.22 44739.30 00:34:55.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 58.85 0.23 17397.43 4987.31 47902.08 00:34:55.207 ======================================================== 00:34:55.207 Total : 139.65 0.55 14492.67 228.22 47902.08 00:34:55.207 00:34:55.207 11:43:20 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:55.207 EAL: No free 2048 kB hugepages reported on node 1 00:34:56.582 Initializing NVMe Controllers 00:34:56.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:56.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:56.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:56.582 Initialization complete. Launching workers. 00:34:56.582 ======================================================== 00:34:56.582 Latency(us) 00:34:56.582 Device Information : IOPS MiB/s Average min max 00:34:56.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8273.51 32.32 3867.20 656.25 9737.53 00:34:56.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3792.94 14.82 8438.08 4724.91 20068.08 00:34:56.582 ======================================================== 00:34:56.582 Total : 12066.45 47.13 5304.00 656.25 20068.08 00:34:56.582 00:34:56.582 11:43:21 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:34:56.582 11:43:21 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:34:56.582 11:43:21 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:56.582 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.169 Initializing NVMe Controllers 00:34:59.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:59.169 Controller IO queue size 128, less than required. 00:34:59.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:59.169 Controller IO queue size 128, less than required. 00:34:59.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:59.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:59.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:59.169 Initialization complete. Launching workers. 
00:34:59.169 ======================================================== 00:34:59.169 Latency(us) 00:34:59.169 Device Information : IOPS MiB/s Average min max 00:34:59.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 925.40 231.35 143843.63 80018.00 207626.53 00:34:59.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.80 151.70 216208.11 76844.82 352255.41 00:34:59.169 ======================================================== 00:34:59.169 Total : 1532.20 383.05 172502.13 76844.82 352255.41 00:34:59.169 00:34:59.169 11:43:23 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:34:59.169 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.169 No valid NVMe controllers or AIO or URING devices found 00:34:59.169 Initializing NVMe Controllers 00:34:59.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:59.169 Controller IO queue size 128, less than required. 00:34:59.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:59.169 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:34:59.169 Controller IO queue size 128, less than required. 00:34:59.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:59.169 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:34:59.169 WARNING: Some requested NVMe devices were skipped 00:34:59.169 11:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:34:59.169 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.701 Initializing NVMe Controllers 00:35:01.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:01.701 Controller IO queue size 128, less than required. 00:35:01.701 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:01.701 Controller IO queue size 128, less than required. 00:35:01.701 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:01.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:01.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:01.701 Initialization complete. Launching workers. 
00:35:01.701 00:35:01.701 ==================== 00:35:01.701 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:35:01.702 TCP transport: 00:35:01.702 polls: 22341 00:35:01.702 idle_polls: 7053 00:35:01.702 sock_completions: 15288 00:35:01.702 nvme_completions: 4489 00:35:01.702 submitted_requests: 6838 00:35:01.702 queued_requests: 1 00:35:01.702 00:35:01.702 ==================== 00:35:01.702 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:35:01.702 TCP transport: 00:35:01.702 polls: 24116 00:35:01.702 idle_polls: 7046 00:35:01.702 sock_completions: 17070 00:35:01.702 nvme_completions: 3833 00:35:01.702 submitted_requests: 5746 00:35:01.702 queued_requests: 1 00:35:01.702 ======================================================== 00:35:01.702 Latency(us) 00:35:01.702 Device Information : IOPS MiB/s Average min max 00:35:01.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1120.17 280.04 117430.33 67749.84 223599.48 00:35:01.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 956.43 239.11 137143.61 56326.92 191639.43 00:35:01.702 ======================================================== 00:35:01.702 Total : 2076.60 519.15 126509.81 56326.92 223599.48 00:35:01.702 00:35:01.702 11:43:26 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:35:01.702 11:43:26 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:01.960 11:43:26 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:35:01.960 11:43:26 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:35:01.960 11:43:26 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:35:01.960 11:43:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:01.960 11:43:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:35:01.960 11:43:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:01.960 11:43:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:35:01.960 11:43:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:01.960 11:43:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:01.960 rmmod nvme_tcp 00:35:01.960 rmmod nvme_fabrics 00:35:01.960 rmmod nvme_keyring 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4086515 ']' 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4086515 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 4086515 ']' 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 4086515 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:01.960 11:43:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4086515 00:35:02.218 11:43:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:02.218 11:43:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:02.219 11:43:27 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4086515' 00:35:02.219 killing process with pid 4086515 00:35:02.219 11:43:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 4086515 00:35:02.219 11:43:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 4086515 00:35:04.119 11:43:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:04.119 11:43:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:04.119 11:43:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:04.119 11:43:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:04.119 11:43:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:04.119 11:43:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.119 11:43:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:04.119 11:43:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.657 11:43:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:06.657 00:35:06.657 real 0m28.252s 00:35:06.657 user 1m10.821s 00:35:06.657 sys 0m10.126s 00:35:06.657 11:43:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:06.657 11:43:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:06.657 ************************************ 00:35:06.657 END TEST nvmf_perf 00:35:06.657 ************************************ 00:35:06.657 11:43:31 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:35:06.657 11:43:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:06.657 11:43:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:06.657 11:43:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:06.657 ************************************ 00:35:06.657 START TEST nvmf_fio_host 00:35:06.657 ************************************ 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:35:06.657 * Looking for test storage... 
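For reference, the nvmf_perf pass that just finished reduces to a short RPC sequence plus the perf invocations. A condensed sketch, not part of the test scripts themselves; every path, NQN, address and flag below is copied from the log output above rather than invented:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"

    # Target-side setup, as driven by host/perf.sh:
    $rpc nvmf_create_transport -t tcp -o                      # TCP transport init
    $rpc bdev_malloc_create 64 512                            # malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from the log) -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # First fabric-side pass from the log (queue depth 1, 4 KiB, 50/50 randrw, 1 second):
    $spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'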
00:35:06.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.657 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:06.658 11:43:31 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:14.791 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:14.792 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:14.792 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:14.792 Found net devices under 0000:af:00.0: cvl_0_0 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:14.792 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:15.052 Found net devices under 0000:af:00.1: cvl_0_1 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:15.052 11:43:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:15.052 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:15.052 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:15.052 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:15.052 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:15.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:15.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:35:15.311 00:35:15.311 --- 10.0.0.2 ping statistics --- 00:35:15.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.311 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:15.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:15.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:35:15.311 00:35:15.311 --- 10.0.0.1 ping statistics --- 00:35:15.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.311 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4093916 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4093916 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 4093916 ']' 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:15.311 11:43:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.311 [2024-06-10 11:43:40.305745] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:35:15.311 [2024-06-10 11:43:40.305807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.311 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.570 [2024-06-10 11:43:40.435862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:15.570 [2024-06-10 11:43:40.518456] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:15.570 [2024-06-10 11:43:40.518504] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:15.571 [2024-06-10 11:43:40.518517] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.571 [2024-06-10 11:43:40.518529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.571 [2024-06-10 11:43:40.518539] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:15.571 [2024-06-10 11:43:40.518602] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.571 [2024-06-10 11:43:40.518652] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:15.571 [2024-06-10 11:43:40.518745] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.571 [2024-06-10 11:43:40.518745] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:16.138 11:43:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:16.138 11:43:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:35:16.138 11:43:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:16.397 [2024-06-10 11:43:41.426469] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.397 11:43:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:35:16.397 11:43:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:16.397 11:43:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.655 11:43:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:35:16.655 Malloc1 00:35:16.655 11:43:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:16.914 11:43:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:17.178 11:43:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.439 [2024-06-10 11:43:42.437732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.439 11:43:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:17.697 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:17.698 11:43:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:18.266 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:18.266 fio-3.35 00:35:18.266 Starting 1 thread 00:35:18.266 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.794 00:35:20.794 test: (groupid=0, jobs=1): err= 0: pid=4094605: Mon Jun 10 11:43:45 2024 00:35:20.794 read: IOPS=9047, BW=35.3MiB/s (37.1MB/s)(70.9MiB/2007msec) 00:35:20.794 slat (nsec): min=1521, max=254446, avg=1669.20, stdev=2630.36 00:35:20.794 clat (usec): min=3947, max=12598, avg=7803.21, stdev=602.75 00:35:20.794 lat (usec): min=3983, max=12599, avg=7804.88, stdev=602.54 00:35:20.794 clat percentiles (usec): 00:35:20.794 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:35:20.794 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 7963], 00:35:20.794 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:35:20.794 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[10683], 99.95th=[11600], 00:35:20.794 | 99.99th=[12649] 00:35:20.794 bw ( KiB/s): 
min=35409, max=36664, per=99.91%, avg=36158.25, stdev=532.29, samples=4 00:35:20.794 iops : min= 8852, max= 9166, avg=9039.50, stdev=133.19, samples=4 00:35:20.794 write: IOPS=9061, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2007msec); 0 zone resets 00:35:20.794 slat (nsec): min=1574, max=241133, avg=1749.56, stdev=1960.63 00:35:20.794 clat (usec): min=2538, max=12398, avg=6275.30, stdev=512.87 00:35:20.794 lat (usec): min=2554, max=12400, avg=6277.05, stdev=512.69 00:35:20.794 clat percentiles (usec): 00:35:20.794 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5932], 00:35:20.794 | 30.00th=[ 6063], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:35:20.794 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7046], 00:35:20.794 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[10552], 99.95th=[11469], 00:35:20.795 | 99.99th=[11731] 00:35:20.795 bw ( KiB/s): min=35968, max=36488, per=99.99%, avg=36243.75, stdev=226.06, samples=4 00:35:20.795 iops : min= 8992, max= 9122, avg=9060.75, stdev=56.60, samples=4 00:35:20.795 lat (msec) : 4=0.08%, 10=99.80%, 20=0.13% 00:35:20.795 cpu : usr=62.46%, sys=32.90%, ctx=130, majf=0, minf=5 00:35:20.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:35:20.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:20.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:20.795 issued rwts: total=18159,18187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:20.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:20.795 00:35:20.795 Run status group 0 (all jobs): 00:35:20.795 READ: bw=35.3MiB/s (37.1MB/s), 35.3MiB/s-35.3MiB/s (37.1MB/s-37.1MB/s), io=70.9MiB (74.4MB), run=2007-2007msec 00:35:20.795 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.5MB), run=2007-2007msec 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # awk '{print $3}' 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:20.795 11:43:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:35:20.795 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:35:20.795 fio-3.35 00:35:20.795 Starting 1 thread 00:35:20.795 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.727 [2024-06-10 11:43:46.802548] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a860 is same with the state(5) to be set 00:35:21.727 [2024-06-10 11:43:46.802645] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100a860 is same with the state(5) to be set 00:35:23.624 00:35:23.624 test: (groupid=0, jobs=1): err= 0: pid=4095130: Mon Jun 10 11:43:48 2024 00:35:23.624 read: IOPS=8672, BW=136MiB/s (142MB/s)(272MiB/2005msec) 00:35:23.624 slat (usec): min=3, max=115, avg= 3.70, stdev= 1.39 00:35:23.624 clat (usec): min=2190, max=53410, avg=8995.68, stdev=4127.03 00:35:23.624 lat (usec): min=2193, max=53414, avg=8999.38, stdev=4127.12 00:35:23.624 clat percentiles (usec): 00:35:23.624 | 1.00th=[ 4555], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6587], 00:35:23.624 | 30.00th=[ 7373], 40.00th=[ 7898], 50.00th=[ 8586], 60.00th=[ 9110], 00:35:23.624 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11863], 95.00th=[12649], 00:35:23.624 | 99.00th=[16188], 99.50th=[47449], 99.90th=[52167], 99.95th=[53216], 00:35:23.624 | 99.99th=[53216] 00:35:23.624 bw ( KiB/s): min=53184, max=87776, per=50.56%, avg=70160.00, stdev=17752.98, samples=4 00:35:23.624 iops : min= 3324, max= 5486, avg=4385.00, stdev=1109.56, samples=4 00:35:23.624 write: IOPS=5345, BW=83.5MiB/s (87.6MB/s)(144MiB/1720msec); 0 zone resets 00:35:23.624 slat (usec): min=40, max=292, avg=41.63, stdev= 5.14 00:35:23.624 clat (usec): min=2938, max=17315, avg=9972.15, stdev=1790.94 00:35:23.624 lat (usec): min=2978, max=17356, avg=10013.78, stdev=1791.82 00:35:23.624 clat percentiles (usec): 00:35:23.624 | 1.00th=[ 6456], 5.00th=[ 7504], 10.00th=[ 7963], 20.00th=[ 8455], 00:35:23.624 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:35:23.624 | 70.00th=[10683], 80.00th=[11338], 90.00th=[12387], 95.00th=[13304], 00:35:23.624 | 99.00th=[14746], 99.50th=[15401], 99.90th=[16909], 99.95th=[17171], 00:35:23.624 
| 99.99th=[17433] 00:35:23.624 bw ( KiB/s): min=55744, max=90848, per=85.52%, avg=73152.00, stdev=18074.67, samples=4 00:35:23.624 iops : min= 3484, max= 5678, avg=4572.00, stdev=1129.67, samples=4 00:35:23.624 lat (msec) : 4=0.27%, 10=63.84%, 20=35.41%, 50=0.26%, 100=0.22% 00:35:23.624 cpu : usr=86.68%, sys=11.88%, ctx=41, majf=0, minf=2 00:35:23.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:35:23.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:23.624 issued rwts: total=17389,9195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:23.624 00:35:23.624 Run status group 0 (all jobs): 00:35:23.624 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=272MiB (285MB), run=2005-2005msec 00:35:23.624 WRITE: bw=83.5MiB/s (87.6MB/s), 83.5MiB/s-83.5MiB/s (87.6MB/s-87.6MB/s), io=144MiB (151MB), run=1720-1720msec 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:23.624 rmmod nvme_tcp 00:35:23.624 rmmod nvme_fabrics 00:35:23.624 rmmod nvme_keyring 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 4093916 ']' 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 4093916 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 4093916 ']' 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 4093916 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4093916 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4093916' 00:35:23.624 killing process with pid 4093916 00:35:23.624 
11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 4093916 00:35:23.624 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 4093916 00:35:23.883 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:23.883 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:23.883 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:23.883 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:23.883 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:23.883 11:43:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.883 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:23.883 11:43:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.418 11:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:26.418 00:35:26.418 real 0m19.596s 00:35:26.418 user 1m1.028s 00:35:26.418 sys 0m9.297s 00:35:26.418 11:43:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:26.418 11:43:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.418 ************************************ 00:35:26.418 END TEST nvmf_fio_host 00:35:26.418 ************************************ 00:35:26.418 11:43:51 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:35:26.418 11:43:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:26.418 11:43:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:26.418 11:43:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:26.418 ************************************ 00:35:26.418 START TEST nvmf_failover 00:35:26.418 ************************************ 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:35:26.418 * Looking for test storage... 
00:35:26.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.418 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:35:26.419 11:43:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:34.542 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:34.542 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:34.542 Found net devices under 0000:af:00.0: cvl_0_0 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:34.542 Found net devices under 0000:af:00.1: cvl_0_1 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:34.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:34.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:35:34.542 00:35:34.542 --- 10.0.0.2 ping statistics --- 00:35:34.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.542 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:35:34.542 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:34.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:35:34.802 00:35:34.802 --- 10.0.0.1 ping statistics --- 00:35:34.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.802 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=4099971 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 4099971 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 4099971 ']' 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:34.802 11:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:34.802 [2024-06-10 11:43:59.749483] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
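Condensed for reference, the TCP data-path setup that the trace above performs before launching the target; the interface names (cvl_0_0, cvl_0_1), addresses and namespace name are the ones used in this run, and the full script paths are shortened here:

  # move the first e810 port into a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1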
00:35:34.802 [2024-06-10 11:43:59.749543] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.802 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.802 [2024-06-10 11:43:59.865658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:35.061 [2024-06-10 11:43:59.952408] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.061 [2024-06-10 11:43:59.952449] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.061 [2024-06-10 11:43:59.952462] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.061 [2024-06-10 11:43:59.952477] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.061 [2024-06-10 11:43:59.952487] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:35.061 [2024-06-10 11:43:59.952533] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.061 [2024-06-10 11:43:59.952646] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:35.061 [2024-06-10 11:43:59.952647] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.626 11:44:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:35.626 11:44:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:35:35.626 11:44:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:35.626 11:44:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:35.626 11:44:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:35.626 11:44:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:35.626 11:44:00 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:35.884 [2024-06-10 11:44:00.905524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.884 11:44:00 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:36.141 Malloc0 00:35:36.142 11:44:01 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:36.400 11:44:01 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:36.658 11:44:01 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.916 [2024-06-10 11:44:01.886416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.916 11:44:01 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:37.174 [2024-06-10 11:44:02.115134] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:37.174 11:44:02 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:37.431 [2024-06-10 11:44:02.351982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:35:37.431 11:44:02 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4100412 00:35:37.431 11:44:02 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:35:37.431 11:44:02 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:37.431 11:44:02 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4100412 /var/tmp/bdevperf.sock 00:35:37.431 11:44:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 4100412 ']' 00:35:37.431 11:44:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:37.432 11:44:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:37.432 11:44:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:37.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:37.432 11:44:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:37.432 11:44:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:38.397 11:44:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:38.397 11:44:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:35:38.397 11:44:03 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:38.685 NVMe0n1 00:35:38.685 11:44:03 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:38.954 00:35:38.954 11:44:03 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4100695 00:35:38.955 11:44:03 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:38.955 11:44:03 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:35:40.328 11:44:04 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.328 [2024-06-10 11:44:05.208669] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208723] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the 
state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208733] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208743] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208752] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208760] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208769] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208777] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208786] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208794] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208803] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208811] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208820] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208829] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208838] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208847] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208855] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208864] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208879] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208889] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208897] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208907] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208915] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208924] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208932] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208941] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208950] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208959] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208976] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.328 [2024-06-10 11:44:05.208985] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.208993] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209002] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209011] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209021] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209030] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209038] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209047] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209056] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209064] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209073] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209082] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209090] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209099] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209108] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 
11:44:05.209117] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209128] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209137] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209146] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209155] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209163] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209172] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209180] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209189] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209198] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209207] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209216] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209224] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209233] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209242] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209250] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209259] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209268] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209276] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209285] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209294] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209303] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same 
with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209311] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209320] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209329] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209337] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 [2024-06-10 11:44:05.209345] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1852f90 is same with the state(5) to be set 00:35:40.329 11:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:35:43.613 11:44:08 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:43.613 00:35:43.613 11:44:08 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:43.870 [2024-06-10 11:44:08.729448] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854540 is same with the state(5) to be set 00:35:43.870 11:44:08 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:35:47.151 11:44:11 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.151 [2024-06-10 11:44:11.976558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.151 11:44:12 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:35:48.090 11:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:48.349 [2024-06-10 11:44:13.226625] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226669] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226683] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226696] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226709] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226720] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226732] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226745] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226756] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226768] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226780] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226792] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226804] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226815] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226828] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226839] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226852] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226863] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226875] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226894] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226907] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226920] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226931] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226944] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226955] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226968] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.226993] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.227005] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the 
state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.227016] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.227029] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.227041] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.227053] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.227065] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.227079] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.227091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 [2024-06-10 11:44:13.227104] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854c40 is same with the state(5) to be set 00:35:48.349 11:44:13 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 4100695 00:35:54.912 0 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 4100412 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 4100412 ']' 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 4100412 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4100412 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4100412' 00:35:54.912 killing process with pid 4100412 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 4100412 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 4100412 00:35:54.912 11:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:54.912 [2024-06-10 11:44:02.432310] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:35:54.912 [2024-06-10 11:44:02.432381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100412 ] 00:35:54.912 EAL: No free 2048 kB hugepages reported on node 1 00:35:54.912 [2024-06-10 11:44:02.553232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.912 [2024-06-10 11:44:02.636050] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.912 Running I/O for 15 seconds... 
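Condensed for reference, the target bring-up and listener failover sequence that host/failover.sh drives in this run, reconstructed from the rpc.py calls visible in the trace above; the NQN, ports and addresses are the ones used by this test, full paths are shortened to rpc.py / bdevperf / bdevperf.py, and the backgrounding shown is only illustrative of the script running these as separate processes:

  # target side: transport, backing bdev, subsystem and the first three listeners
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # initiator side: bdevperf attaches over ports 4420 and 4421, then runs verify I/O
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  # failover exercise: drop and restore listeners while the I/O is running
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422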
00:35:54.912 [2024-06-10 11:44:05.211328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211653] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.912 [2024-06-10 11:44:05.211824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.912 [2024-06-10 11:44:05.211837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.211852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.211865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.211880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.211892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.211907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.211919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.211933] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.211946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.211961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.211973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.211987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 
11:44:05.212770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.913 [2024-06-10 11:44:05.212906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.913 [2024-06-10 11:44:05.212920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.914 [2024-06-10 11:44:05.212933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.212947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.914 [2024-06-10 11:44:05.212960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.212975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.914 [2024-06-10 11:44:05.212988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.914 [2024-06-10 11:44:05.213014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.914 [2024-06-10 11:44:05.213041] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.914 [2024-06-10 11:44:05.213074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.914 [2024-06-10 11:44:05.213101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.914 [2024-06-10 11:44:05.213128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 
11:44:05.213900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.213981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.213993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.914 [2024-06-10 11:44:05.214007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.914 [2024-06-10 11:44:05.214020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.915 [2024-06-10 11:44:05.214455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92872 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92880 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92888 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92896 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92904 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92912 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92920 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92928 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92936 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92944 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.214959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.214970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92952 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.214982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.214995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.215005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.215016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92960 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.215028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.215041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.215050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.215061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92968 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.215073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.215086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.915 [2024-06-10 11:44:05.215096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.915 [2024-06-10 11:44:05.215106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92976 len:8 PRP1 0x0 PRP2 0x0 00:35:54.915 [2024-06-10 11:44:05.215118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.915 [2024-06-10 11:44:05.215131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.916 [2024-06-10 11:44:05.215141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.916 [2024-06-10 11:44:05.215151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92984 len:8 PRP1 0x0 PRP2 0x0 00:35:54.916 [2024-06-10 11:44:05.215163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:05.215176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.916 [2024-06-10 11:44:05.215186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.916 [2024-06-10 11:44:05.215198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92992 len:8 PRP1 0x0 PRP2 0x0 00:35:54.916 [2024-06-10 11:44:05.215211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:05.215260] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16e58b0 was disconnected and freed. reset controller. 
00:35:54.916 [2024-06-10 11:44:05.215277] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:35:54.916 [2024-06-10 11:44:05.215305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.916 [2024-06-10 11:44:05.215319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:05.215332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.916 [2024-06-10 11:44:05.215344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:05.215358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.916 [2024-06-10 11:44:05.215371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:05.215383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.916 [2024-06-10 11:44:05.215396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:05.215408] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.916 [2024-06-10 11:44:05.219157] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.916 [2024-06-10 11:44:05.219193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c73a0 (9): Bad file descriptor 00:35:54.916 [2024-06-10 11:44:05.337036] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
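The entries above trace one failover cycle in the bdev_nvme layer: the TCP qpair to 10.0.0.2:4420 is torn down, every queued I/O is completed manually with ABORTED - SQ DELETION, the controller on nqn.2016-06.io.spdk:cnode1 is disconnected, and the driver retries the alternate trid 10.0.0.2:4421 before logging "Resetting controller successful". A minimal sketch of how this kind of failover scenario is typically driven through SPDK's rpc.py is given below; the NQN and the 4420/4421 listener addresses are taken from this log, but the bdev name Nvme0 is only an illustrative placeholder, the listener-removal trigger is an assumption (the log does not show what dropped the first path), and exact flags can differ between SPDK releases, so treat it as an approximation rather than the commands this job actually ran.

# Attach the same subsystem twice under one controller name so the second
# path (port 4421) is registered as a failover target for the first (4420).
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
# Removing the first listener on the target side forces the qpair disconnect
# seen above; queued I/O is aborted with SQ DELETION and the initiator fails
# over to port 4421, matching the bdev_nvme_failover_trid notice in the log.
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420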
00:35:54.916 [2024-06-10 11:44:08.732207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732538] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732826] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.916 [2024-06-10 11:44:08.732864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.916 [2024-06-10 11:44:08.732879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.732891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.732906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.732919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.732933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.732946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.732960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.732973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.732987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92456 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 
[2024-06-10 11:44:08.733377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.917 [2024-06-10 11:44:08.733566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.917 [2024-06-10 11:44:08.733598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.917 [2024-06-10 11:44:08.733626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.917 [2024-06-10 11:44:08.733653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.917 [2024-06-10 11:44:08.733681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.917 [2024-06-10 11:44:08.733708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.917 [2024-06-10 11:44:08.733736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.917 [2024-06-10 11:44:08.733941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.917 [2024-06-10 11:44:08.733953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.733967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.918 [2024-06-10 11:44:08.733980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.733995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.918 [2024-06-10 11:44:08.734007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.918 [2024-06-10 11:44:08.734035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.918 [2024-06-10 11:44:08.734062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.918 [2024-06-10 11:44:08.734089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.918 [2024-06-10 11:44:08.734116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92704 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.918 [2024-06-10 11:44:08.734221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.918 [2024-06-10 11:44:08.734247] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.918 [2024-06-10 11:44:08.734273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.918 [2024-06-10 11:44:08.734298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c73a0 is same with the state(5) to be set 00:35:54.918 [2024-06-10 11:44:08.734484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92712 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92720 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92728 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92736 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92744 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92752 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92760 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92768 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92776 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.734933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92784 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.734958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.734967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 
11:44:08.734978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92792 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.734990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.735002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.735012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.735023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92800 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.735036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.735048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.735060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.735070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92808 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.735082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.735095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.735104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.735115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92816 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.735127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.735140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.735150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.735161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92824 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.735173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.735185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.735195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.735205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92832 len:8 PRP1 0x0 PRP2 0x0 00:35:54.918 [2024-06-10 11:44:08.735217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.918 [2024-06-10 11:44:08.735230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.918 [2024-06-10 11:44:08.735240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.918 [2024-06-10 11:44:08.735251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92840 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92848 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92856 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92864 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92872 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92880 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:92888 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92896 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92904 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92912 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92920 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92928 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92936 len:8 PRP1 0x0 PRP2 0x0 
00:35:54.919 [2024-06-10 11:44:08.735825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92944 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92952 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92960 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.735962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.735974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.735985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.735995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92968 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.736007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.736020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.736029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.736041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92976 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.736054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.736066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.736077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.736088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92984 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.736100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.736113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.736124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.736137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92992 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.736149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.736162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.736171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.736182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93000 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.736194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.736207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.736217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.736228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93008 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.736240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.736253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.736263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.919 [2024-06-10 11:44:08.736273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93016 len:8 PRP1 0x0 PRP2 0x0 00:35:54.919 [2024-06-10 11:44:08.736285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.919 [2024-06-10 11:44:08.736298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.919 [2024-06-10 11:44:08.736308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.736319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93024 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.736331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.736344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.736354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.736364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93032 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.736376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.736389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.736399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.754663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93040 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.754686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.754704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.754718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.754733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92088 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.754750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.754771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.754786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.754801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92096 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.754819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.754836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.754851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.754865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92104 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.754882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.754900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.754914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.754929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92112 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.754946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.754964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.754978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.754993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92120 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:54.920 [2024-06-10 11:44:08.755027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92128 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92136 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92144 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92152 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92160 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92168 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755409] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92176 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92184 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92192 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92200 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93048 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92208 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:35:54.920 [2024-06-10 11:44:08.755823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92216 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92224 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.755948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.920 [2024-06-10 11:44:08.755963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92232 len:8 PRP1 0x0 PRP2 0x0 00:35:54.920 [2024-06-10 11:44:08.755980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.920 [2024-06-10 11:44:08.755998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.920 [2024-06-10 11:44:08.756011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92240 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92248 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92256 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756199] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92264 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92272 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92280 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92288 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92296 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92304 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92312 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92320 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92328 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92336 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92344 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.756920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92352 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.756937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.756955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.756969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 
[2024-06-10 11:44:08.756983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92360 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.757000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.757018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.757032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.757046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92368 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.757062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.757080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.757094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.757108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92376 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.757125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.757143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.757156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.921 [2024-06-10 11:44:08.757171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92384 len:8 PRP1 0x0 PRP2 0x0 00:35:54.921 [2024-06-10 11:44:08.757188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.921 [2024-06-10 11:44:08.757206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.921 [2024-06-10 11:44:08.757219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92392 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92400 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92408 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92416 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92424 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92432 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92440 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92448 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:92456 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92464 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.757927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.757954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.757973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.757993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92472 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92480 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92488 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92496 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92504 len:8 PRP1 0x0 PRP2 0x0 
00:35:54.922 [2024-06-10 11:44:08.758361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92512 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92520 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92528 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92536 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92544 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92552 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.758929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.758948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.758968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92560 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.758993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.759018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.759036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.922 [2024-06-10 11:44:08.759057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92568 len:8 PRP1 0x0 PRP2 0x0 00:35:54.922 [2024-06-10 11:44:08.759080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.922 [2024-06-10 11:44:08.759105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.922 [2024-06-10 11:44:08.759124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92576 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.759195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.759213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92584 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.759281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.759300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92032 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.759372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.759391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92040 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.759459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.759479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92048 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.759548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.759566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92056 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.759646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.759665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92064 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.759733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.759752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92072 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.759820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.759839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92080 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.759911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.759930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.759952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92592 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.759976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:54.923 [2024-06-10 11:44:08.760000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92600 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.760063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.760089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92608 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.760152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.760177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92616 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.760239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.760263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92624 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.760327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.760351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92632 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.760414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.760438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92640 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.760501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.760526] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92648 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.760599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.760624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92656 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.760688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.760713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92664 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.760780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.760807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.760827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.760848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92672 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.769501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.769537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.769561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.769598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92680 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.769629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.769661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.923 [2024-06-10 11:44:08.769686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.769713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92688 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.769743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.769775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:35:54.923 [2024-06-10 11:44:08.769800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.923 [2024-06-10 11:44:08.769826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92696 len:8 PRP1 0x0 PRP2 0x0 00:35:54.923 [2024-06-10 11:44:08.769856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.923 [2024-06-10 11:44:08.769888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.924 [2024-06-10 11:44:08.769913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.924 [2024-06-10 11:44:08.769939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92704 len:8 PRP1 0x0 PRP2 0x0 00:35:54.924 [2024-06-10 11:44:08.769970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:08.770066] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16c16a0 was disconnected and freed. reset controller. 00:35:54.924 [2024-06-10 11:44:08.770100] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:35:54.924 [2024-06-10 11:44:08.770131] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.924 [2024-06-10 11:44:08.770218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c73a0 (9): Bad file descriptor 00:35:54.924 [2024-06-10 11:44:08.779357] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.924 [2024-06-10 11:44:08.989296] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:54.924 [2024-06-10 11:44:13.228355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 
11:44:13.228680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.228981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.228993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229233] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.924 [2024-06-10 11:44:13.229370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.924 [2024-06-10 11:44:13.229384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.925 [2024-06-10 11:44:13.229411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.925 [2024-06-10 11:44:13.229438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.925 [2024-06-10 11:44:13.229467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:54.925 [2024-06-10 11:44:13.229812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.229975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.229990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.925 [2024-06-10 11:44:13.230443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.925 [2024-06-10 11:44:13.230457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.926 [2024-06-10 11:44:13.230805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.230857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44688 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.230870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.230895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.230906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44696 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.230918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.230941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.230952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44704 len:8 
PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.230964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.230976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.230986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.230997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44712 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.231010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.231023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.231032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.231043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44720 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.231056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.231070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.231080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.231090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44728 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.231102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.231115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.231125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.231135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44736 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.231147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.231160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.231172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.231182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44744 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.231195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.231207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.231217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.231228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44752 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 
11:44:13.231240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.231253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.231262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.231272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44760 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.231285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.231298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.231307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.231318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44768 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.231330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.926 [2024-06-10 11:44:13.231343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.926 [2024-06-10 11:44:13.231352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.926 [2024-06-10 11:44:13.231363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44776 len:8 PRP1 0x0 PRP2 0x0 00:35:54.926 [2024-06-10 11:44:13.231375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44784 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44792 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44800 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44808 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44816 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44824 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44832 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44840 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44848 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44856 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44864 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44872 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.231958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.231968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44880 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.231981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.231994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44888 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.232026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.232039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44896 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.232072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:54.927 [2024-06-10 11:44:13.232084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44904 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.232117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.232130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44912 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.232162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.232176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44920 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.232208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.232221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44928 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.232255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.232268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44936 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.232301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.232313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44944 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.232346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.232359] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44952 len:8 PRP1 0x0 PRP2 0x0 00:35:54.927 [2024-06-10 11:44:13.232392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.927 [2024-06-10 11:44:13.232405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.927 [2024-06-10 11:44:13.232415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.927 [2024-06-10 11:44:13.232425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44960 len:8 PRP1 0x0 PRP2 0x0 00:35:54.928 [2024-06-10 11:44:13.232437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.232450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.928 [2024-06-10 11:44:13.232460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.928 [2024-06-10 11:44:13.232471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44968 len:8 PRP1 0x0 PRP2 0x0 00:35:54.928 [2024-06-10 11:44:13.232483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.232495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.928 [2024-06-10 11:44:13.232505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.928 [2024-06-10 11:44:13.232516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44976 len:8 PRP1 0x0 PRP2 0x0 00:35:54.928 [2024-06-10 11:44:13.232528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.232544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.928 [2024-06-10 11:44:13.232553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.928 [2024-06-10 11:44:13.232564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44984 len:8 PRP1 0x0 PRP2 0x0 00:35:54.928 [2024-06-10 11:44:13.232581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.232596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.928 [2024-06-10 11:44:13.232605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.928 [2024-06-10 11:44:13.243666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44992 len:8 PRP1 0x0 PRP2 0x0 00:35:54.928 [2024-06-10 11:44:13.243683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.243697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:35:54.928 [2024-06-10 11:44:13.243708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.928 [2024-06-10 11:44:13.243718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44288 len:8 PRP1 0x0 PRP2 0x0 00:35:54.928 [2024-06-10 11:44:13.243731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.243745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:54.928 [2024-06-10 11:44:13.243756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:54.928 [2024-06-10 11:44:13.243768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44296 len:8 PRP1 0x0 PRP2 0x0 00:35:54.928 [2024-06-10 11:44:13.243781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.243831] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16c16a0 was disconnected and freed. reset controller. 00:35:54.928 [2024-06-10 11:44:13.243847] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:35:54.928 [2024-06-10 11:44:13.243876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.928 [2024-06-10 11:44:13.243890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.243905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.928 [2024-06-10 11:44:13.243918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.243931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.928 [2024-06-10 11:44:13.243945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.243958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:54.928 [2024-06-10 11:44:13.243971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.928 [2024-06-10 11:44:13.243984] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:54.928 [2024-06-10 11:44:13.244013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c73a0 (9): Bad file descriptor 00:35:54.928 [2024-06-10 11:44:13.247986] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:54.928 [2024-06-10 11:44:13.287282] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
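The burst of ABORTED - SQ DELETION completions above is the expected signature of a path being torn down: the queued READ/WRITE commands on qid:1 are aborted when the submission queue is deleted, bdev_nvme fails over from 10.0.0.2:4422 back to 10.0.0.2:4420, and the controller is reset. The script's pass criterion, visible in the next trace lines, is simply that three such resets were logged. A minimal sketch of that check, assuming the capture file is the try.txt used elsewhere in this test:

    # Count the reset notices bdevperf logged; the failover test expects exactly three.
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }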
00:35:54.928
00:35:54.928 Latency(us)
00:35:54.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:54.928 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:54.928 Verification LBA range: start 0x0 length 0x4000
00:35:54.928 NVMe0n1 : 15.01 8373.55 32.71 818.41 0.00 13895.57 557.06 49073.36
00:35:54.928 ===================================================================================================================
00:35:54.928 Total : 8373.55 32.71 818.41 0.00 13895.57 557.06 49073.36
00:35:54.928 Received shutdown signal, test time was about 15.000000 seconds
00:35:54.928
00:35:54.928 Latency(us)
00:35:54.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:54.928 ===================================================================================================================
00:35:54.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4103206
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4103206 /var/tmp/bdevperf.sock
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 4103206 ']'
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:35:54.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
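Here the harness relaunches bdevperf idle (-z, wait for RPC configuration) against a private RPC socket so that controllers can be attached and detached between runs. A sketch of that launch pattern, with the binary path shortened and waitforlisten being the autotest helper shown in the trace:

    # Start bdevperf with no bdevs configured and wait for its RPC socket to accept commands.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock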
00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:54.928 11:44:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:35:55.493 11:44:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:55.493 11:44:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:35:55.493 11:44:20 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:55.493 [2024-06-10 11:44:20.567149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:55.493 11:44:20 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:35:55.749 [2024-06-10 11:44:20.803845] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:35:55.749 11:44:20 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:56.312 NVMe0n1 00:35:56.312 11:44:21 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:56.569 00:35:56.569 11:44:21 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:57.135 00:35:57.135 11:44:21 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:57.135 11:44:21 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:35:57.135 11:44:22 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:57.392 11:44:22 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:36:00.667 11:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:00.667 11:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:36:00.667 11:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:00.667 11:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4104276 00:36:00.667 11:44:25 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 4104276 00:36:02.041 0 00:36:02.041 11:44:26 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:02.041 [2024-06-10 11:44:19.488718] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:36:02.041 [2024-06-10 11:44:19.488784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103206 ] 00:36:02.041 EAL: No free 2048 kB hugepages reported on node 1 00:36:02.041 [2024-06-10 11:44:19.609480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.041 [2024-06-10 11:44:19.686306] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.041 [2024-06-10 11:44:22.429592] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:36:02.041 [2024-06-10 11:44:22.429645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.041 [2024-06-10 11:44:22.429662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.041 [2024-06-10 11:44:22.429677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.041 [2024-06-10 11:44:22.429690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.041 [2024-06-10 11:44:22.429704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.041 [2024-06-10 11:44:22.429717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.041 [2024-06-10 11:44:22.429730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:02.041 [2024-06-10 11:44:22.429743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:02.041 [2024-06-10 11:44:22.429755] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:02.041 [2024-06-10 11:44:22.429788] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:02.041 [2024-06-10 11:44:22.429808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad03a0 (9): Bad file descriptor 00:36:02.041 [2024-06-10 11:44:22.435947] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:36:02.041 Running I/O for 1 seconds... 
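The try.txt excerpt above records the exercise itself: the target gains listeners on ports 4421 and 4422, bdevperf attaches NVMe0 through all three ports, the 4420 path is detached, and bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421 before the one-second verify run whose results follow below. Condensed from the trace, with the rpc.py path shortened for readability, the sequence is roughly:

    # Target side: expose two extra portals for the same subsystem.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Host side (bdevperf socket): attach the same controller via every portal, then drop the active path.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3    # give bdev_nvme time to fail over to the next path before checking controllers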
00:36:02.041 00:36:02.041 Latency(us) 00:36:02.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:02.041 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:02.041 Verification LBA range: start 0x0 length 0x4000 00:36:02.041 NVMe0n1 : 1.01 8448.69 33.00 0.00 0.00 15082.45 1382.81 13526.63 00:36:02.041 =================================================================================================================== 00:36:02.041 Total : 8448.69 33.00 0.00 0.00 15082.45 1382.81 13526.63 00:36:02.041 11:44:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:02.041 11:44:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:36:02.041 11:44:27 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:02.299 11:44:27 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:02.299 11:44:27 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:36:02.557 11:44:27 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:02.815 11:44:27 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:36:06.097 11:44:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:06.097 11:44:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 4103206 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 4103206 ']' 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 4103206 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4103206 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4103206' 00:36:06.097 killing process with pid 4103206 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 4103206 00:36:06.097 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 4103206 00:36:06.355 11:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:36:06.355 11:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:36:06.614 
11:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:06.614 rmmod nvme_tcp 00:36:06.614 rmmod nvme_fabrics 00:36:06.614 rmmod nvme_keyring 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 4099971 ']' 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 4099971 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 4099971 ']' 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 4099971 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4099971 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4099971' 00:36:06.614 killing process with pid 4099971 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 4099971 00:36:06.614 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 4099971 00:36:06.874 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:06.874 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:06.874 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:06.874 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:06.874 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:06.874 11:44:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:06.874 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:06.874 11:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.412 11:44:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:09.412 00:36:09.412 real 0m42.830s 00:36:09.412 user 2m9.433s 00:36:09.412 sys 0m11.554s 00:36:09.412 11:44:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:09.412 11:44:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
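Teardown, as traced above: the scratch try.txt is removed, the subsystem deleted, and nvmftestfini unloads the kernel initiator modules and kills the target before the test namespace is removed. A rough manual equivalent, assuming the modules were loaded only for this test; the netns removal is an assumption about what _remove_spdk_ns does on this rig:

    modprobe -v -r nvme-tcp              # also drops nvme_fabrics and nvme_keyring, as the rmmod output shows
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # $nvmfpid is the target pid the harness recorded at startup
    ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1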
00:36:09.412 ************************************ 00:36:09.412 END TEST nvmf_failover 00:36:09.412 ************************************ 00:36:09.412 11:44:33 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:36:09.412 11:44:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:09.412 11:44:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:09.412 11:44:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:09.412 ************************************ 00:36:09.412 START TEST nvmf_host_discovery 00:36:09.412 ************************************ 00:36:09.412 11:44:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:36:09.412 * Looking for test storage... 00:36:09.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:09.412 11:44:34 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:36:09.412 11:44:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:17.535 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:17.535 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:17.535 Found net devices under 0000:af:00.0: cvl_0_0 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:17.535 Found net devices under 0000:af:00.1: cvl_0_1 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:17.535 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:36:17.536 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:17.536 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:17.536 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:17.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:17.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:36:17.795 00:36:17.795 --- 10.0.0.2 ping statistics --- 00:36:17.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.795 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:17.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:17.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:36:17.795 00:36:17.795 --- 10.0.0.1 ping statistics --- 00:36:17.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.795 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:17.795 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=4109733 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 4109733 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 4109733 ']' 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:18.099 11:44:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.099 [2024-06-10 11:44:42.987895] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:36:18.100 [2024-06-10 11:44:42.987959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:18.100 EAL: No free 2048 kB hugepages reported on node 1 00:36:18.100 [2024-06-10 11:44:43.107276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.365 [2024-06-10 11:44:43.193260] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:18.365 [2024-06-10 11:44:43.193302] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:18.365 [2024-06-10 11:44:43.193316] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:18.365 [2024-06-10 11:44:43.193328] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:18.365 [2024-06-10 11:44:43.193338] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
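For these phy TCP runs the harness splits the two NIC ports across a network namespace so initiator and target traffic really leaves the host: cvl_0_0 (10.0.0.2, target side) moves into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, initiator side) stays in the root namespace, and the target application is then launched inside that namespace. A condensed sketch of the plumbing visible in the trace above; the interface names are the ones this rig reports:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator sanity check
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &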
00:36:18.365 [2024-06-10 11:44:43.193367] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.933 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:18.933 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.934 [2024-06-10 11:44:43.881338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.934 [2024-06-10 11:44:43.893563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.934 null0 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.934 null1 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4109807 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 4109807 /tmp/host.sock 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 4109807 ']' 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:36:18.934 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:18.934 11:44:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:18.934 [2024-06-10 11:44:43.976376] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:36:18.934 [2024-06-10 11:44:43.976434] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109807 ] 00:36:18.934 EAL: No free 2048 kB hugepages reported on node 1 00:36:19.193 [2024-06-10 11:44:44.097841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.193 [2024-06-10 11:44:44.182842] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
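With the target up, the discovery test publishes a discovery-service listener on port 8009, creates two null bdevs to expose later, starts a second SPDK application (the "host") on /tmp/host.sock, and points bdev_nvme_start_discovery at that port; everything that follows polls the host application and compares what discovery reports against what the target actually exposes. Condensed from the trace, with rpc_cmd being the autotest wrapper around scripts/rpc.py and binary paths shortened:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512
    rpc_cmd bdev_null_create null1 1000 512
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &        # the "host" side application
    hostpid=$!
    waitforlisten "$hostpid" /tmp/host.sock
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test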
00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:36:20.131 11:44:45 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:20.131 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:20.132 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.132 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:36:20.132 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:20.132 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.132 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.132 [2024-06-10 11:44:45.233157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
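The repeated rpc_cmd | jq | sort | xargs runs above are the test's two polling helpers: one lists the controllers the host application currently sees, the other the bdevs it has created from them, and both are expected to stay empty until the target exposes nqn.2016-06.io.spdk:cnode0 on port 4420 (the listener notice just above). As driven here they amount to roughly:

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }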
]] 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:20.391 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:36:20.392 11:44:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:36:20.960 [2024-06-10 11:44:45.943299] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:20.960 [2024-06-10 11:44:45.943328] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:20.960 [2024-06-10 11:44:45.943346] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:21.219 [2024-06-10 11:44:46.070798] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:36:21.219 [2024-06-10 11:44:46.295154] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:21.219 [2024-06-10 11:44:46.295180] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:21.479 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.740 [2024-06-10 11:44:46.777553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:21.740 [2024-06-10 11:44:46.778036] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:21.740 [2024-06-10 11:44:46.778063] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:21.740 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.999 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:22.000 11:44:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:22.000 [2024-06-10 11:44:46.904840] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:36:22.000 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.000 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:36:22.000 11:44:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:36:22.000 [2024-06-10 11:44:47.007605] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:22.000 [2024-06-10 11:44:47.007628] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:36:22.000 [2024-06-10 11:44:47.007638] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:22.937 11:44:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:22.938 11:44:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:36:22.938 11:44:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:36:22.938 11:44:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:22.938 11:44:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:22.938 11:44:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.938 11:44:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:22.938 11:44:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:22.938 11:44:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:22.938 11:44:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:22.938 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.198 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:23.198 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:23.198 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:23.198 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:23.198 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:23.198 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.198 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:23.198 [2024-06-10 11:44:48.066126] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:36:23.198 [2024-06-10 11:44:48.066152] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:23.198 [2024-06-10 11:44:48.066681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.198 [2024-06-10 11:44:48.066704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.198 [2024-06-10 11:44:48.066720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.198 [2024-06-10 11:44:48.066733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.198 [2024-06-10 11:44:48.066747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.198 [2024-06-10 11:44:48.066761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.199 [2024-06-10 11:44:48.066774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:23.199 [2024-06-10 11:44:48.066786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:23.199 [2024-06-10 11:44:48.066800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742d70 is same with the state(5) to be set 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:23.199 11:44:48 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:23.199 [2024-06-10 11:44:48.076689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742d70 (9): Bad file descriptor 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:23.199 [2024-06-10 11:44:48.086733] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:23.199 [2024-06-10 11:44:48.086975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.199 [2024-06-10 11:44:48.086997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x742d70 with addr=10.0.0.2, port=4420 00:36:23.199 [2024-06-10 11:44:48.087011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742d70 is same with the state(5) to be set 00:36:23.199 [2024-06-10 11:44:48.087030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742d70 (9): Bad file descriptor 00:36:23.199 [2024-06-10 11:44:48.087047] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:23.199 [2024-06-10 11:44:48.087060] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:23.199 [2024-06-10 11:44:48.087073] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:23.199 [2024-06-10 11:44:48.087090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
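The autotest_common.sh@913-919 xtrace that keeps repeating around the reconnect errors above is the test suite's retry helper: it evals a condition string up to a fixed number of attempts, sleeping between tries. A hedged reconstruction from this trace only (the real helper in the SPDK tree may differ in details such as its failure handling):

waitforcondition() {
    # cond is an arbitrary shell expression, e.g.
    #   '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local cond=$1
    local max=10
    while (( max-- )); do
        # The trace shows the condition being re-eval'd about once per
        # second until it holds or the attempts run out.
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}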
00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.199 [2024-06-10 11:44:48.096800] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:23.199 [2024-06-10 11:44:48.097132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.199 [2024-06-10 11:44:48.097152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x742d70 with addr=10.0.0.2, port=4420 00:36:23.199 [2024-06-10 11:44:48.097166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742d70 is same with the state(5) to be set 00:36:23.199 [2024-06-10 11:44:48.097183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742d70 (9): Bad file descriptor 00:36:23.199 [2024-06-10 11:44:48.097201] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:23.199 [2024-06-10 11:44:48.097213] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:23.199 [2024-06-10 11:44:48.097225] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:23.199 [2024-06-10 11:44:48.097240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.199 [2024-06-10 11:44:48.106862] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:23.199 [2024-06-10 11:44:48.107216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.199 [2024-06-10 11:44:48.107235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x742d70 with addr=10.0.0.2, port=4420 00:36:23.199 [2024-06-10 11:44:48.107248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742d70 is same with the state(5) to be set 00:36:23.199 [2024-06-10 11:44:48.107266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742d70 (9): Bad file descriptor 00:36:23.199 [2024-06-10 11:44:48.107283] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:23.199 [2024-06-10 11:44:48.107295] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:23.199 [2024-06-10 11:44:48.107307] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:23.199 [2024-06-10 11:44:48.107331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
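Every '[[ "$(get_subsystem_names)" == ... ]]'-style condition in this log bottoms out in three small helpers from host/discovery.sh (@55, @59 and @63 in the trace), each querying the host application's RPC socket and flattening the result onto one line. A sketch reconstructed from the xtrace; the exact bodies in the script may vary:

get_subsystem_names() {
    # Controller names known to the host-side bdev_nvme module.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    # Block devices created on top of the attached namespaces.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {
    # Listener ports (trsvcid) of every path a given controller holds.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}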
00:36:23.199 [2024-06-10 11:44:48.116929] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:23.199 [2024-06-10 11:44:48.117267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.199 [2024-06-10 11:44:48.117288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x742d70 with addr=10.0.0.2, port=4420 00:36:23.199 [2024-06-10 11:44:48.117302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742d70 is same with the state(5) to be set 00:36:23.199 [2024-06-10 11:44:48.117321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742d70 (9): Bad file descriptor 00:36:23.199 [2024-06-10 11:44:48.117338] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:23.199 [2024-06-10 11:44:48.117350] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:23.199 [2024-06-10 11:44:48.117362] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:23.199 [2024-06-10 11:44:48.117388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:36:23.199 [2024-06-10 11:44:48.126994] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:23.199 [2024-06-10 11:44:48.127325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.199 [2024-06-10 11:44:48.127344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x742d70 with addr=10.0.0.2, port=4420 00:36:23.199 [2024-06-10 11:44:48.127357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742d70 is same with the state(5) to be set 00:36:23.199 [2024-06-10 11:44:48.127375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742d70 (9): Bad file descriptor 00:36:23.199 [2024-06-10 11:44:48.127391] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:23.199 [2024-06-10 11:44:48.127403] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:23.199 [2024-06-10 11:44:48.127415] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:23.199 [2024-06-10 11:44:48.127444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
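The is_notification_count_eq checks (host/discovery.sh@74-@80 in the trace) count bdev notifications newer than the last id the test has seen, then advance that id. A hedged sketch of how the counters printed in the trace (notification_count, notify_id, expected_count) fit together; the variable plumbing is inferred from the xtrace, not copied from the script:

get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
        | jq '. | length')
    # notify_id advances by however many notifications were just consumed,
    # matching the notify_id=0/1/2/4 values printed in this log.
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1
    # expected_count stays visible to the eval'd condition via bash dynamic scoping.
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}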
00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:23.199 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:23.199 [2024-06-10 11:44:48.137424] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:23.199 [2024-06-10 11:44:48.137790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.199 [2024-06-10 11:44:48.137811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x742d70 with addr=10.0.0.2, port=4420 00:36:23.199 [2024-06-10 11:44:48.137824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742d70 is same with the state(5) to be set 00:36:23.199 [2024-06-10 11:44:48.137853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742d70 (9): Bad file descriptor 00:36:23.199 [2024-06-10 11:44:48.137871] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:23.199 [2024-06-10 11:44:48.137882] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:23.199 [2024-06-10 11:44:48.137895] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:23.199 [2024-06-10 11:44:48.137911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:23.199 [2024-06-10 11:44:48.147489] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:36:23.199 [2024-06-10 11:44:48.147826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:23.199 [2024-06-10 11:44:48.147846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x742d70 with addr=10.0.0.2, port=4420 00:36:23.199 [2024-06-10 11:44:48.147859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742d70 is same with the state(5) to be set 00:36:23.199 [2024-06-10 11:44:48.147886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x742d70 (9): Bad file descriptor 00:36:23.199 [2024-06-10 11:44:48.147903] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:23.199 [2024-06-10 11:44:48.147915] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:36:23.199 [2024-06-10 11:44:48.147927] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:23.199 [2024-06-10 11:44:48.147943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
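Immediately below, discovery drops the stale 10.0.0.2:4420 path and the @131/@132 assertions confirm that only the second listener survives and that no new notifications were produced. The same check can be run by hand against the host socket; a minimal sketch assuming scripts/rpc.py from the SPDK tree (the rpc_cmd seen in the trace is the test suite's wrapper around it):

# List the listener ports the nvme0 controller still has paths to;
# after the 4420 listener is removed this should settle on just "4421".
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs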
00:36:23.199 [2024-06-10 11:44:48.152556] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:36:23.200 [2024-06-10 11:44:48.152583] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.200 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:23.460 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:36:23.461 
11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.461 11:44:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:24.839 [2024-06-10 11:44:49.512325] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:36:24.839 [2024-06-10 11:44:49.512346] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:36:24.839 [2024-06-10 11:44:49.512362] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:36:24.839 [2024-06-10 11:44:49.641782] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:36:25.099 [2024-06-10 11:44:49.952519] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:36:25.099 [2024-06-10 11:44:49.952553] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:36:25.099 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.099 11:44:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:25.099 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:36:25.099 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:25.099 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:25.099 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:25.099 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:25.099 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:25.100 11:44:49 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:25.100 request: 00:36:25.100 { 00:36:25.100 "name": "nvme", 00:36:25.100 "trtype": "tcp", 00:36:25.100 "traddr": "10.0.0.2", 00:36:25.100 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:25.100 "adrfam": "ipv4", 00:36:25.100 "trsvcid": "8009", 00:36:25.100 "wait_for_attach": true, 00:36:25.100 "method": "bdev_nvme_start_discovery", 00:36:25.100 "req_id": 1 00:36:25.100 } 00:36:25.100 Got JSON-RPC error response 00:36:25.100 response: 00:36:25.100 { 00:36:25.100 "code": -17, 00:36:25.100 "message": "File exists" 00:36:25.100 } 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:25.100 11:44:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:25.100 request: 00:36:25.100 { 00:36:25.100 "name": "nvme_second", 00:36:25.100 "trtype": "tcp", 00:36:25.100 "traddr": "10.0.0.2", 00:36:25.100 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:25.100 "adrfam": "ipv4", 00:36:25.100 "trsvcid": "8009", 00:36:25.100 "wait_for_attach": true, 00:36:25.100 "method": "bdev_nvme_start_discovery", 00:36:25.100 "req_id": 1 00:36:25.100 } 00:36:25.100 Got JSON-RPC error response 00:36:25.100 response: 00:36:25.100 { 00:36:25.100 "code": -17, 00:36:25.100 "message": "File exists" 00:36:25.100 } 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:36:25.100 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.359 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:36:25.359 11:44:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:25.359 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:36:25.359 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:25.359 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:25.359 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:25.360 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:25.360 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:25.360 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:36:25.360 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.360 11:44:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:26.297 [2024-06-10 11:44:51.220236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.297 [2024-06-10 11:44:51.220279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75d480 with addr=10.0.0.2, port=8010 00:36:26.297 [2024-06-10 11:44:51.220304] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:26.297 [2024-06-10 11:44:51.220317] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:26.297 [2024-06-10 11:44:51.220329] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:36:27.233 [2024-06-10 11:44:52.222638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.233 [2024-06-10 11:44:52.222679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75d480 with addr=10.0.0.2, port=8010 00:36:27.233 [2024-06-10 11:44:52.222704] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:27.233 [2024-06-10 11:44:52.222716] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:27.233 [2024-06-10 11:44:52.222727] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:36:28.170 [2024-06-10 11:44:53.224695] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:36:28.170 request: 00:36:28.170 { 00:36:28.170 "name": "nvme_second", 00:36:28.170 "trtype": "tcp", 00:36:28.170 "traddr": "10.0.0.2", 00:36:28.170 "hostnqn": "nqn.2021-12.io.spdk:test", 00:36:28.170 "adrfam": "ipv4", 00:36:28.170 "trsvcid": "8010", 00:36:28.170 "attach_timeout_ms": 3000, 00:36:28.170 "method": "bdev_nvme_start_discovery", 00:36:28.170 "req_id": 1 00:36:28.170 } 00:36:28.170 Got 
JSON-RPC error response 00:36:28.170 response: 00:36:28.170 { 00:36:28.170 "code": -110, 00:36:28.170 "message": "Connection timed out" 00:36:28.170 } 00:36:28.170 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:28.170 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:36:28.170 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:36:28.171 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4109807 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:28.430 rmmod nvme_tcp 00:36:28.430 rmmod nvme_fabrics 00:36:28.430 rmmod nvme_keyring 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 4109733 ']' 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 4109733 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 4109733 ']' 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 4109733 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4109733 00:36:28.430 11:44:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4109733' 00:36:28.430 killing process with pid 4109733 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 4109733 00:36:28.430 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 4109733 00:36:28.689 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:28.689 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:28.689 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:28.689 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:28.689 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:28.689 11:44:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.689 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:28.689 11:44:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.622 11:44:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:30.622 00:36:30.622 real 0m21.701s 00:36:30.622 user 0m23.776s 00:36:30.622 sys 0m8.966s 00:36:30.622 11:44:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:30.622 11:44:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:36:30.622 ************************************ 00:36:30.622 END TEST nvmf_host_discovery 00:36:30.622 ************************************ 00:36:30.882 11:44:55 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:36:30.882 11:44:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:30.882 11:44:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:30.882 11:44:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:30.882 ************************************ 00:36:30.882 START TEST nvmf_host_multipath_status 00:36:30.882 ************************************ 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:36:30.882 * Looking for test storage... 
00:36:30.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:30.882 11:44:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:36:30.882 11:44:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:40.864 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.864 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:40.864 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:40.865 Found net devices under 0000:af:00.0: cvl_0_0 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:40.865 Found net devices under 0000:af:00.1: cvl_0_1 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:40.865 11:45:04 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:40.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:40.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:36:40.865 00:36:40.865 --- 10.0.0.2 ping statistics --- 00:36:40.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.865 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:40.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:40.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:36:40.865 00:36:40.865 --- 10.0.0.1 ping statistics --- 00:36:40.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.865 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=4116317 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 4116317 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 4116317 ']' 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:40.865 11:45:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:40.865 [2024-06-10 11:45:04.705403] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:36:40.865 [2024-06-10 11:45:04.705462] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:40.865 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.865 [2024-06-10 11:45:04.831139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:40.865 [2024-06-10 11:45:04.915229] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:40.865 [2024-06-10 11:45:04.915277] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:40.865 [2024-06-10 11:45:04.915291] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:40.865 [2024-06-10 11:45:04.915303] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:40.865 [2024-06-10 11:45:04.915313] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:40.865 [2024-06-10 11:45:04.915370] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.865 [2024-06-10 11:45:04.915374] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.865 11:45:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:40.865 11:45:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:36:40.865 11:45:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:40.865 11:45:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:40.865 11:45:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:40.865 11:45:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:40.865 11:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4116317 00:36:40.865 11:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:40.865 [2024-06-10 11:45:05.872837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.865 11:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:41.124 Malloc0 00:36:41.124 11:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:36:41.383 11:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:41.641 11:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:41.900 [2024-06-10 11:45:06.782546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.900 11:45:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:42.159 [2024-06-10 11:45:07.007203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4117024 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4117024 /var/tmp/bdevperf.sock 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 4117024 ']' 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:42.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:42.159 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:36:43.096 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:43.096 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:36:43.096 11:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:36:43.355 11:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:36:43.614 Nvme0n1 00:36:43.614 11:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:36:43.872 Nvme0n1 00:36:43.872 11:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:36:43.872 11:45:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:36:46.405 11:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:36:46.405 11:45:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:36:46.405 11:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:46.405 11:45:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:36:47.782 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:36:47.782 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:47.782 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:47.782 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:47.782 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:47.782 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:47.782 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:47.782 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:48.041 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:48.041 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:48.041 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:48.041 11:45:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:48.300 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:48.300 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:48.300 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:48.300 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:48.559 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:48.559 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:48.559 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:48.559 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:36:48.817 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:48.817 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:48.817 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:48.818 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:48.818 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:48.818 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:36:48.818 11:45:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:49.076 11:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:36:49.335 11:45:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:36:50.273 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:36:50.273 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:50.273 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:50.273 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:50.532 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:50.532 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:36:50.532 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:50.532 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:50.790 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:50.790 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:50.790 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:50.791 11:45:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:51.049 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:36:51.049 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:51.049 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:51.049 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:51.308 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:51.308 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:51.308 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:51.308 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:51.566 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:51.566 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:51.567 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:51.567 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:51.844 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:51.844 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:36:51.844 11:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:52.117 11:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:36:52.376 11:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:36:53.311 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:36:53.311 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:53.311 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:53.311 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:53.570 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:53.570 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:36:53.570 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:53.570 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:53.829 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:53.829 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:53.829 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:53.829 11:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:54.088 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.088 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:54.088 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.088 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:54.347 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.347 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:54.347 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.347 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:54.606 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.606 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:36:54.606 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:54.606 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:54.606 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:54.606 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:36:54.606 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:36:54.865 11:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:55.124 11:45:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:36:56.060 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:36:56.060 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:36:56.060 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:56.060 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:56.319 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:56.319 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:56.319 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:56.319 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:56.579 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:56.579 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:56.579 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:56.579 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:56.839 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:56.839 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:36:56.839 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:56.839 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:57.097 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:57.097 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:36:57.097 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.097 11:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:36:57.097 11:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
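For readability, here is a minimal reconstruction of the two helpers being traced above, inferred only from the xtrace lines (the authoritative definitions live in test/nvmf/host/multipath_status.sh; the rpc_py/bdevperf_sock variable names below are illustrative, the full paths are what the trace actually shows):

    # port_status <trsvcid> <attribute> <expected>: ask bdevperf, over its RPC socket,
    # for every I/O path it sees, pick the path whose listener port matches, and
    # compare the chosen attribute (current/connected/accessible) with <expected>.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_sock=/var/tmp/bdevperf.sock

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$("$rpc_py" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    # check_status <cur4420> <cur4421> <conn4420> <conn4421> <acc4420> <acc4421>
    # matches the six-boolean calls seen in the trace (multipath_status.sh@68-@73).
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }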
00:36:57.097 11:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:36:57.097 11:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:57.097 11:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:36:57.356 11:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:57.356 11:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:36:57.356 11:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:36:57.615 11:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:36:57.874 11:45:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:36:58.810 11:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:36:58.810 11:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:36:59.069 11:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:59.069 11:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:36:59.069 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:59.069 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:36:59.069 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:59.069 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:36:59.327 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:36:59.328 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:36:59.328 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:59.328 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:36:59.586 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:59.586 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
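The target-side ANA flips driven between the status checks reduce to two listener RPCs; a sketch of the set_ANA_state helper as it appears in the trace at multipath_status.sh@59-@60 (the target_ip variable name is assumed here, only the literal address 10.0.0.2 and NQN appear in the log):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    target_ip=10.0.0.2   # illustrative variable name; the trace only shows the literal address

    # set_ANA_state <state for listener 4420> <state for listener 4421>
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$target_ip" -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$target_ip" -s 4421 -n "$2"
    }

    # e.g. the @108 step above: set_ANA_state inaccessible inaccessible; sleep 1
    # (the one-second sleep gives the host time to re-read the ANA log page
    #  before check_status queries bdevperf again)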
00:36:59.586 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:59.586 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:36:59.845 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:36:59.845 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:36:59.845 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:36:59.845 11:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:00.104 11:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:00.104 11:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:37:00.104 11:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:00.104 11:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:00.362 11:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:00.363 11:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:37:00.363 11:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:37:00.621 11:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:00.880 11:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:37:01.816 11:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:37:01.816 11:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:37:01.816 11:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:01.816 11:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:02.074 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:02.074 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:02.074 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:02.074 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:02.333 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:02.333 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:02.333 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:02.333 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:02.591 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:02.591 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:02.591 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:02.591 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:02.850 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:02.850 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:37:02.850 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:02.850 11:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:03.109 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:03.109 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:03.109 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:03.109 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:03.367 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:03.367 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:37:03.626 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:37:03.626 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:37:03.884 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:04.143 11:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:37:05.081 11:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:37:05.081 11:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:05.081 11:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:05.081 11:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:05.340 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:05.340 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:05.340 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:05.340 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:05.599 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:05.599 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:05.599 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:05.599 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:05.859 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:05.859 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:05.859 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:05.859 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:05.859 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:06.118 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:06.118 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:06.118 11:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:06.118 11:45:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:06.118 11:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:06.118 11:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:06.118 11:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:06.377 11:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:06.377 11:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:37:06.377 11:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:06.634 11:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:06.892 11:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:37:07.826 11:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:37:07.826 11:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:37:07.826 11:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:07.826 11:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:08.085 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:08.085 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:08.085 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:08.085 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:08.345 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:08.345 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:08.345 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:08.345 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:08.604 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:08.604 11:45:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:08.604 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:08.604 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:08.862 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:08.862 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:08.862 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:08.863 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:08.863 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:08.863 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:08.863 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:08.863 11:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:09.121 11:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:09.121 11:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:37:09.121 11:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:09.380 11:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:37:09.639 11:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:37:10.573 11:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:37:10.573 11:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:10.573 11:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:10.573 11:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:10.830 11:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:10.830 11:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:10.830 11:45:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:10.830 11:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:11.087 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:11.087 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:11.087 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:11.087 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:11.344 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:11.344 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:11.344 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:11.344 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:11.601 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:11.601 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:11.601 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:11.601 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:11.859 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:11.859 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:11.859 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:11.859 11:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:12.117 11:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:12.117 11:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:37:12.117 11:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:12.376 11:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:37:12.634 11:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:37:13.570 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:37:13.570 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:13.570 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:13.570 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:13.829 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:13.829 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:13.829 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:13.829 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:14.088 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:14.088 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:14.088 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:14.088 11:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:14.347 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:14.347 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:14.347 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:14.347 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:14.606 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:14.606 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:14.606 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:14.606 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:14.606 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:14.606 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:37:14.606 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:14.606 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:14.865 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:14.865 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4117024 00:37:14.865 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 4117024 ']' 00:37:14.865 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 4117024 00:37:14.865 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:37:14.865 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:14.865 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4117024 00:37:15.127 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:37:15.127 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:37:15.127 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4117024' 00:37:15.127 killing process with pid 4117024 00:37:15.127 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 4117024 00:37:15.127 11:45:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 4117024 00:37:15.127 Connection closed with partial response: 00:37:15.127 00:37:15.127 00:37:15.127 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4117024 00:37:15.127 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:15.127 [2024-06-10 11:45:07.074311] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:37:15.127 [2024-06-10 11:45:07.074385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4117024 ] 00:37:15.127 EAL: No free 2048 kB hugepages reported on node 1 00:37:15.127 [2024-06-10 11:45:07.169609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.127 [2024-06-10 11:45:07.243086] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:15.127 Running I/O for 90 seconds... 
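The try.txt dump that follows records the I/O bdevperf issued while paths were being flipped; completions returned on an inaccessible path carry path-related status (03/02), which the print helper labels ASYMMETRIC ACCESS INACCESSIBLE. A quick, hedged way to tally those completions from the same file (path taken from the cat command above):

    # Count completions failed with ANA Inaccessible (sct=0x3 Path Related, sc=0x02)
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt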
00:37:15.127 [2024-06-10 11:45:22.646480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.127 [2024-06-10 11:45:22.646973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.646988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.127 [2024-06-10 11:45:22.646998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.647012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.127 [2024-06-10 11:45:22.647021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.647036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.127 [2024-06-10 11:45:22.647046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.647061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.127 [2024-06-10 11:45:22.647070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:15.127 [2024-06-10 11:45:22.647085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.128 [2024-06-10 11:45:22.647095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.647109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.128 [2024-06-10 11:45:22.647118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.647133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.128 [2024-06-10 11:45:22.647142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.647157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.647166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.647180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.647189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.647203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.647212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.647226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.647236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:15.128 [2024-06-10 11:45:22.648414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.648990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.648999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.649016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.649025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.649043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.649052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.649068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.649077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.649094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.128 [2024-06-10 11:45:22.649103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:15.128 [2024-06-10 11:45:22.649120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.128 [2024-06-10 11:45:22.649129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:37:15.129 [2024-06-10 11:45:22.649225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.129 [2024-06-10 11:45:22.649742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.649984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.649995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:15.129 [2024-06-10 11:45:22.650489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:15.129 [2024-06-10 11:45:22.650656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.129 [2024-06-10 11:45:22.650665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:22.650694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:22.650726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:22.650755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 
nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:22.650784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:22.650812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:22.650840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:22.650870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:22.650898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:22.650928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.650956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.650976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.650985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
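The "(03/02)" pairs printed by spdk_nvme_print_completion above are the NVMe status fields: status code type 0x3 (Path Related Status) with status code 0x02, i.e. Asymmetric Access Inaccessible, which is the expected completion while the multipath test moves one listener's ANA state to inaccessible. A minimal bash sketch of that decoding, using only the sct/sc values shown in the log; the helper name below is illustrative and is not part of the SPDK test scripts:

    # decode_nvme_status SCT SC -- illustrative helper, not from the test suite
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "$sct/$sc" in
            0x3/0x2) echo "Path Related Status: Asymmetric Access Inaccessible (ANA)" ;;
            0x3/0x3) echo "Path Related Status: Asymmetric Access Transition" ;;
            0x0/0x0) echo "Generic Command Status: Successful Completion" ;;
            *)       echo "sct=$sct sc=$sc (see the NVMe base spec status code tables)" ;;
        esac
    }
    decode_nvme_status 0x3 0x2   # matches the "(03/02)" completions logged above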
00:37:15.130 [2024-06-10 11:45:22.651352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:22.651381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:22.651390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:37.509099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:37.509144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:37.509169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:37.509194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:37.509217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:37.509434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.130 [2024-06-10 11:45:37.509461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:37.509487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:37.509512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:37.509537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:37.509562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:37.509592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:37.509632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:15.130 [2024-06-10 11:45:37.509653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.130 [2024-06-10 11:45:37.509663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.131 [2024-06-10 11:45:37.510566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.131 [2024-06-10 11:45:37.510601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.131 [2024-06-10 11:45:37.510625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.131 [2024-06-10 11:45:37.510651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.131 [2024-06-10 11:45:37.510784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.131 [2024-06-10 11:45:37.510808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.510981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.510995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:15.131 [2024-06-10 11:45:37.511005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.511159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.131 [2024-06-10 11:45:37.511170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.511185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.131 [2024-06-10 11:45:37.511194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:15.131 [2024-06-10 11:45:37.511209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:15.131 [2024-06-10 11:45:37.511218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:15.131 Received shutdown signal, test time was about 30.854003 seconds 00:37:15.131 00:37:15.131 Latency(us) 00:37:15.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.131 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:37:15.131 Verification LBA range: start 0x0 length 0x4000 00:37:15.131 Nvme0n1 : 30.85 8676.43 33.89 0.00 0.00 14733.53 365.36 4026531.84 00:37:15.131 =================================================================================================================== 00:37:15.131 Total : 8676.43 33.89 0.00 0.00 14733.53 365.36 4026531.84 00:37:15.131 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:15.390 rmmod nvme_tcp 00:37:15.390 rmmod nvme_fabrics 00:37:15.390 rmmod nvme_keyring 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 4116317 ']' 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 4116317 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 4116317 ']' 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 4116317 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:37:15.390 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:15.649 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4116317 00:37:15.649 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:15.649 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:15.649 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4116317' 00:37:15.649 killing process with pid 4116317 00:37:15.649 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 4116317 00:37:15.649 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 4116317 00:37:15.945 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:15.945 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:15.945 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:15.945 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:15.945 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:15.945 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.945 11:45:40 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:15.945 11:45:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.884 11:45:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:17.884 00:37:17.884 real 0m47.062s 00:37:17.884 user 1m59.664s 00:37:17.884 sys 0m17.928s 00:37:17.884 11:45:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:17.884 11:45:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:17.884 ************************************ 00:37:17.884 END TEST nvmf_host_multipath_status 00:37:17.884 ************************************ 00:37:17.884 11:45:42 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:37:17.884 11:45:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:17.884 11:45:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:17.884 11:45:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:17.884 ************************************ 00:37:17.884 START TEST nvmf_discovery_remove_ifc 00:37:17.884 ************************************ 00:37:17.884 11:45:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:37:18.144 * Looking for test storage... 00:37:18.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:18.144 
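The nvmf/common.sh variables sourced above (NVMF_PORT, NVME_SUBNQN, NVME_HOSTNQN, NVME_HOSTID, NVME_CONNECT, and the NVME_HOST argument array) are what the host-side helpers later expand into an nvme-cli connect call. A rough sketch of how they combine, assuming standard nvme-cli flags and the 10.0.0.2 target address configured further down; the exact invocation used by the test scripts may differ:

    # illustrative only -- mirrors how the common.sh variables are typically consumed
    $NVME_CONNECT -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n "$NVME_SUBNQN" \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # equivalent, using the prebuilt argument array:
    # $NVME_CONNECT -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n "$NVME_SUBNQN" "${NVME_HOST[@]}"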
11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:18.144 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:18.145 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.145 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:18.145 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:18.145 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:18.145 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:18.145 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:37:18.145 11:45:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:26.270 11:45:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:26.270 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:26.270 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:26.270 Found net devices under 0000:af:00.0: cvl_0_0 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:26.270 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:26.271 Found net devices under 0000:af:00.1: cvl_0_1 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:26.271 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:26.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:26.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:37:26.531 00:37:26.531 --- 10.0.0.2 ping statistics --- 00:37:26.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:26.531 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:26.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:26.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:37:26.531 00:37:26.531 --- 10.0.0.1 ping statistics --- 00:37:26.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:26.531 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:26.531 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=4127060 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 4127060 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 4127060 ']' 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:26.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:26.791 11:45:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:26.791 [2024-06-10 11:45:51.717940] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:37:26.791 [2024-06-10 11:45:51.718001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:26.791 EAL: No free 2048 kB hugepages reported on node 1 00:37:26.791 [2024-06-10 11:45:51.835281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.050 [2024-06-10 11:45:51.920203] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:27.051 [2024-06-10 11:45:51.920243] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:27.051 [2024-06-10 11:45:51.920257] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:27.051 [2024-06-10 11:45:51.920269] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:27.051 [2024-06-10 11:45:51.920282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:27.051 [2024-06-10 11:45:51.920307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.619 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:27.619 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:37:27.619 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:27.619 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:27.619 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:27.619 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:27.619 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:37:27.619 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:27.619 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:27.619 [2024-06-10 11:45:52.680276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:27.619 [2024-06-10 11:45:52.688442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:37:27.619 null0 00:37:27.619 [2024-06-10 11:45:52.720454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:27.878 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:27.878 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4127339 00:37:27.878 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:37:27.878 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4127339 /tmp/host.sock 00:37:27.878 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 4127339 ']' 00:37:27.878 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:37:27.878 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:27.878 
11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:37:27.878 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:37:27.878 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:27.878 11:45:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:27.878 [2024-06-10 11:45:52.792165] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:37:27.878 [2024-06-10 11:45:52.792223] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127339 ] 00:37:27.878 EAL: No free 2048 kB hugepages reported on node 1 00:37:27.878 [2024-06-10 11:45:52.913760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.138 [2024-06-10 11:45:53.000230] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:28.706 11:45:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:30.085 [2024-06-10 11:45:54.840793] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:30.085 [2024-06-10 11:45:54.840830] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:30.085 [2024-06-10 11:45:54.840851] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:30.085 [2024-06-10 11:45:54.927136] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:37:30.085 [2024-06-10 11:45:55.151518] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:37:30.085 [2024-06-10 11:45:55.151572] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:37:30.085 [2024-06-10 11:45:55.151606] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:37:30.085 [2024-06-10 11:45:55.151626] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:37:30.085 [2024-06-10 11:45:55.151649] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:30.085 [2024-06-10 11:45:55.158974] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e733a0 was disconnected and freed. delete nvme_qpair. 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:30.085 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:37:30.344 11:45:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:31.723 11:45:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:32.659 11:45:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:33.598 11:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:34.535 11:45:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:35.915 [2024-06-10 11:46:00.602204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:37:35.915 [2024-06-10 11:46:00.602261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:35.915 [2024-06-10 11:46:00.602279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.915 [2024-06-10 11:46:00.602295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:35.915 [2024-06-10 11:46:00.602308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.915 [2024-06-10 11:46:00.602323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:35.915 [2024-06-10 11:46:00.602335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.915 [2024-06-10 11:46:00.602349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:35.915 [2024-06-10 11:46:00.602362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.915 [2024-06-10 11:46:00.602376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:37:35.915 [2024-06-10 11:46:00.602388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:35.915 [2024-06-10 11:46:00.602401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a510 is same with the state(5) to be set 00:37:35.915 [2024-06-10 11:46:00.612223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a510 (9): Bad file descriptor 00:37:35.915 [2024-06-10 11:46:00.622269] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:35.915 11:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:35.915 
11:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:35.915 11:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:35.915 11:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.915 11:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:35.915 11:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:35.915 11:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:36.852 [2024-06-10 11:46:01.677677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:37:36.852 [2024-06-10 11:46:01.677765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3a510 with addr=10.0.0.2, port=4420 00:37:36.852 [2024-06-10 11:46:01.677805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3a510 is same with the state(5) to be set 00:37:36.852 [2024-06-10 11:46:01.677869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3a510 (9): Bad file descriptor 00:37:36.852 [2024-06-10 11:46:01.678781] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:37:36.852 [2024-06-10 11:46:01.678839] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:36.852 [2024-06-10 11:46:01.678871] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:36.852 [2024-06-10 11:46:01.678904] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:36.852 [2024-06-10 11:46:01.678952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:36.852 [2024-06-10 11:46:01.678983] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:36.852 11:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.852 11:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:37:36.852 11:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:37.789 [2024-06-10 11:46:02.681490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
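Editorial note: after the target-side address is deleted and cvl_0_0 is taken down, the host polls its bdev list once per second until nvme0n1 disappears. A minimal sketch of the traced wait_for_bdev/get_bdev_list loop, assuming rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py as seen in the trace:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # wait_for_bdev '' : poll until the host's bdev list is empty
    until [[ "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" == "" ]]; do
        sleep 1
    done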
00:37:37.789 [2024-06-10 11:46:02.681533] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:37:37.789 [2024-06-10 11:46:02.681562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:37.789 [2024-06-10 11:46:02.681583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:37.789 [2024-06-10 11:46:02.681599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:37.789 [2024-06-10 11:46:02.681612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:37.789 [2024-06-10 11:46:02.681626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:37.789 [2024-06-10 11:46:02.681639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:37.789 [2024-06-10 11:46:02.681652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:37.789 [2024-06-10 11:46:02.681665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:37.789 [2024-06-10 11:46:02.681678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:37:37.789 [2024-06-10 11:46:02.681691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:37.789 [2024-06-10 11:46:02.681705] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
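Editorial note: the aborted ASYNC EVENT REQUEST / KEEP ALIVE completions above are the controller being torn down once the reconnect window expires. That window comes from the options the test passed when it started discovery (traced earlier in this run); roughly:

    # --reconnect-delay-sec 1       retry the lost connection once per second
    # --ctrlr-loss-timeout-sec 2    give up and delete the controller after ~2 s of failed reconnects
    # --fast-io-fail-timeout-sec 1  fail outstanding I/O after 1 s while reconnects continue
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach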
00:37:37.789 [2024-06-10 11:46:02.682463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e399a0 (9): Bad file descriptor 00:37:37.789 [2024-06-10 11:46:02.683477] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:37:37.789 [2024-06-10 11:46:02.683495] nvme_ctrlr.c:1203:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:37.789 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:38.048 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:37:38.048 11:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:37:38.985 11:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:37:39.922 [2024-06-10 11:46:04.735774] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:39.922 [2024-06-10 11:46:04.735796] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:39.922 [2024-06-10 11:46:04.735814] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:39.922 [2024-06-10 11:46:04.822100] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:37:39.922 [2024-06-10 11:46:04.886092] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:37:39.922 [2024-06-10 11:46:04.886132] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:37:39.922 [2024-06-10 11:46:04.886156] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:37:39.922 [2024-06-10 11:46:04.886174] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:37:39.922 [2024-06-10 11:46:04.886186] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:39.922 [2024-06-10 11:46:04.894296] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e7dca0 was disconnected and freed. delete nvme_qpair. 
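Editorial note: once the address is restored and cvl_0_0 comes back up, the discovery poller re-attaches on its own and the subsystem reappears under a new controller name (nvme1, hence bdev nvme1n1). A sketch of the restore-and-wait step, under the same rpc_cmd assumption as above:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # wait_for_bdev nvme1n1 : poll until the re-attached namespace shows up
    until [[ "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" == "nvme1n1" ]]; do
        sleep 1
    done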
00:37:39.922 11:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:37:39.922 11:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:39.922 11:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:37:39.922 11:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:39.922 11:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:39.922 11:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:37:39.922 11:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:37:39.922 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:40.181 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:37:40.181 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:37:40.181 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4127339 00:37:40.181 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 4127339 ']' 00:37:40.181 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 4127339 00:37:40.182 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:37:40.182 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:40.182 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4127339 00:37:40.182 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:40.182 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:40.182 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4127339' 00:37:40.182 killing process with pid 4127339 00:37:40.182 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 4127339 00:37:40.182 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 4127339 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:40.441 rmmod nvme_tcp 00:37:40.441 rmmod nvme_fabrics 00:37:40.441 rmmod nvme_keyring 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
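Editorial note: the teardown traced above (killprocess plus nvmftestfini) reduces to stopping the host-side app, unloading the initiator kernel modules that 'modprobe nvme-tcp' pulled in, and then stopping the target; hostpid and nvmfpid are the PIDs recorded earlier in this run.

    kill "$hostpid"            # 4127339: SPDK app listening on /tmp/host.sock
    modprobe -v -r nvme-tcp    # per the rmmod lines above, also drops nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"            # 4127060: nvmf_tgt running inside cvl_0_0_ns_spdk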
00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 4127060 ']' 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 4127060 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 4127060 ']' 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 4127060 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4127060 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4127060' 00:37:40.441 killing process with pid 4127060 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 4127060 00:37:40.441 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 4127060 00:37:40.719 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:40.719 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:40.719 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:40.719 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:40.719 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:40.719 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:40.719 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:40.719 11:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.627 11:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:42.627 00:37:42.627 real 0m24.771s 00:37:42.627 user 0m27.561s 00:37:42.627 sys 0m8.798s 00:37:42.627 11:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:42.627 11:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:37:42.627 ************************************ 00:37:42.627 END TEST nvmf_discovery_remove_ifc 00:37:42.627 ************************************ 00:37:42.886 11:46:07 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:37:42.886 11:46:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:42.886 11:46:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:42.886 11:46:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:42.886 ************************************ 00:37:42.886 START TEST nvmf_identify_kernel_target 00:37:42.886 ************************************ 00:37:42.886 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:37:42.886 * Looking for test storage... 00:37:42.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:42.886 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:42.886 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:37:42.886 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:42.886 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:42.886 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:42.886 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:42.886 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:37:42.887 11:46:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:52.872 11:46:16 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:52.872 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:52.873 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:52.873 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:52.873 
11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:52.873 Found net devices under 0000:af:00.0: cvl_0_0 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:52.873 Found net devices under 0000:af:00.1: cvl_0_1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:52.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:52.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:37:52.873 00:37:52.873 --- 10.0.0.2 ping statistics --- 00:37:52.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.873 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:52.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:52.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:37:52.873 00:37:52.873 --- 10.0.0.1 ping statistics --- 00:37:52.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.873 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:52.873 
11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:52.873 11:46:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:55.411 Waiting for block devices as requested 00:37:55.411 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:55.719 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:55.719 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:55.719 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:56.002 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:56.002 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:56.002 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:56.278 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:56.278 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:56.278 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:56.537 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:56.537 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:56.537 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:56.797 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:56.797 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:56.797 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:57.056 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:57.056 11:46:22 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:57.056 No valid GPT data, bailing 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:57.056 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:37:57.316 00:37:57.316 Discovery Log Number of Records 2, Generation counter 2 00:37:57.316 =====Discovery Log Entry 0====== 00:37:57.316 trtype: tcp 00:37:57.316 adrfam: ipv4 00:37:57.316 subtype: current discovery subsystem 00:37:57.316 treq: not specified, sq flow control disable supported 00:37:57.316 portid: 1 00:37:57.316 trsvcid: 4420 00:37:57.316 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:57.316 traddr: 10.0.0.1 00:37:57.316 eflags: none 00:37:57.316 sectype: none 00:37:57.316 =====Discovery Log Entry 1====== 
00:37:57.316 trtype: tcp 00:37:57.316 adrfam: ipv4 00:37:57.316 subtype: nvme subsystem 00:37:57.316 treq: not specified, sq flow control disable supported 00:37:57.316 portid: 1 00:37:57.316 trsvcid: 4420 00:37:57.316 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:57.316 traddr: 10.0.0.1 00:37:57.316 eflags: none 00:37:57.316 sectype: none 00:37:57.316 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:37:57.316 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:37:57.316 EAL: No free 2048 kB hugepages reported on node 1 00:37:57.316 ===================================================== 00:37:57.316 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:37:57.316 ===================================================== 00:37:57.316 Controller Capabilities/Features 00:37:57.316 ================================ 00:37:57.316 Vendor ID: 0000 00:37:57.316 Subsystem Vendor ID: 0000 00:37:57.316 Serial Number: c9c48888b81262b0dbfa 00:37:57.316 Model Number: Linux 00:37:57.316 Firmware Version: 6.7.0-68 00:37:57.316 Recommended Arb Burst: 0 00:37:57.316 IEEE OUI Identifier: 00 00 00 00:37:57.316 Multi-path I/O 00:37:57.316 May have multiple subsystem ports: No 00:37:57.316 May have multiple controllers: No 00:37:57.316 Associated with SR-IOV VF: No 00:37:57.316 Max Data Transfer Size: Unlimited 00:37:57.316 Max Number of Namespaces: 0 00:37:57.316 Max Number of I/O Queues: 1024 00:37:57.316 NVMe Specification Version (VS): 1.3 00:37:57.316 NVMe Specification Version (Identify): 1.3 00:37:57.316 Maximum Queue Entries: 1024 00:37:57.316 Contiguous Queues Required: No 00:37:57.316 Arbitration Mechanisms Supported 00:37:57.316 Weighted Round Robin: Not Supported 00:37:57.316 Vendor Specific: Not Supported 00:37:57.316 Reset Timeout: 7500 ms 00:37:57.316 Doorbell Stride: 4 bytes 00:37:57.316 NVM Subsystem Reset: Not Supported 00:37:57.316 Command Sets Supported 00:37:57.316 NVM Command Set: Supported 00:37:57.316 Boot Partition: Not Supported 00:37:57.316 Memory Page Size Minimum: 4096 bytes 00:37:57.316 Memory Page Size Maximum: 4096 bytes 00:37:57.316 Persistent Memory Region: Not Supported 00:37:57.316 Optional Asynchronous Events Supported 00:37:57.316 Namespace Attribute Notices: Not Supported 00:37:57.316 Firmware Activation Notices: Not Supported 00:37:57.317 ANA Change Notices: Not Supported 00:37:57.317 PLE Aggregate Log Change Notices: Not Supported 00:37:57.317 LBA Status Info Alert Notices: Not Supported 00:37:57.317 EGE Aggregate Log Change Notices: Not Supported 00:37:57.317 Normal NVM Subsystem Shutdown event: Not Supported 00:37:57.317 Zone Descriptor Change Notices: Not Supported 00:37:57.317 Discovery Log Change Notices: Supported 00:37:57.317 Controller Attributes 00:37:57.317 128-bit Host Identifier: Not Supported 00:37:57.317 Non-Operational Permissive Mode: Not Supported 00:37:57.317 NVM Sets: Not Supported 00:37:57.317 Read Recovery Levels: Not Supported 00:37:57.317 Endurance Groups: Not Supported 00:37:57.317 Predictable Latency Mode: Not Supported 00:37:57.317 Traffic Based Keep ALive: Not Supported 00:37:57.317 Namespace Granularity: Not Supported 00:37:57.317 SQ Associations: Not Supported 00:37:57.317 UUID List: Not Supported 00:37:57.317 Multi-Domain Subsystem: Not Supported 00:37:57.317 Fixed Capacity Management: Not Supported 00:37:57.317 Variable Capacity Management: Not 
Supported 00:37:57.317 Delete Endurance Group: Not Supported 00:37:57.317 Delete NVM Set: Not Supported 00:37:57.317 Extended LBA Formats Supported: Not Supported 00:37:57.317 Flexible Data Placement Supported: Not Supported 00:37:57.317 00:37:57.317 Controller Memory Buffer Support 00:37:57.317 ================================ 00:37:57.317 Supported: No 00:37:57.317 00:37:57.317 Persistent Memory Region Support 00:37:57.317 ================================ 00:37:57.317 Supported: No 00:37:57.317 00:37:57.317 Admin Command Set Attributes 00:37:57.317 ============================ 00:37:57.317 Security Send/Receive: Not Supported 00:37:57.317 Format NVM: Not Supported 00:37:57.317 Firmware Activate/Download: Not Supported 00:37:57.317 Namespace Management: Not Supported 00:37:57.317 Device Self-Test: Not Supported 00:37:57.317 Directives: Not Supported 00:37:57.317 NVMe-MI: Not Supported 00:37:57.317 Virtualization Management: Not Supported 00:37:57.317 Doorbell Buffer Config: Not Supported 00:37:57.317 Get LBA Status Capability: Not Supported 00:37:57.317 Command & Feature Lockdown Capability: Not Supported 00:37:57.317 Abort Command Limit: 1 00:37:57.317 Async Event Request Limit: 1 00:37:57.317 Number of Firmware Slots: N/A 00:37:57.317 Firmware Slot 1 Read-Only: N/A 00:37:57.317 Firmware Activation Without Reset: N/A 00:37:57.317 Multiple Update Detection Support: N/A 00:37:57.317 Firmware Update Granularity: No Information Provided 00:37:57.317 Per-Namespace SMART Log: No 00:37:57.317 Asymmetric Namespace Access Log Page: Not Supported 00:37:57.317 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:37:57.317 Command Effects Log Page: Not Supported 00:37:57.317 Get Log Page Extended Data: Supported 00:37:57.317 Telemetry Log Pages: Not Supported 00:37:57.317 Persistent Event Log Pages: Not Supported 00:37:57.317 Supported Log Pages Log Page: May Support 00:37:57.317 Commands Supported & Effects Log Page: Not Supported 00:37:57.317 Feature Identifiers & Effects Log Page:May Support 00:37:57.317 NVMe-MI Commands & Effects Log Page: May Support 00:37:57.317 Data Area 4 for Telemetry Log: Not Supported 00:37:57.317 Error Log Page Entries Supported: 1 00:37:57.317 Keep Alive: Not Supported 00:37:57.317 00:37:57.317 NVM Command Set Attributes 00:37:57.317 ========================== 00:37:57.317 Submission Queue Entry Size 00:37:57.317 Max: 1 00:37:57.317 Min: 1 00:37:57.317 Completion Queue Entry Size 00:37:57.317 Max: 1 00:37:57.317 Min: 1 00:37:57.317 Number of Namespaces: 0 00:37:57.317 Compare Command: Not Supported 00:37:57.317 Write Uncorrectable Command: Not Supported 00:37:57.317 Dataset Management Command: Not Supported 00:37:57.317 Write Zeroes Command: Not Supported 00:37:57.317 Set Features Save Field: Not Supported 00:37:57.317 Reservations: Not Supported 00:37:57.317 Timestamp: Not Supported 00:37:57.317 Copy: Not Supported 00:37:57.317 Volatile Write Cache: Not Present 00:37:57.317 Atomic Write Unit (Normal): 1 00:37:57.317 Atomic Write Unit (PFail): 1 00:37:57.317 Atomic Compare & Write Unit: 1 00:37:57.317 Fused Compare & Write: Not Supported 00:37:57.317 Scatter-Gather List 00:37:57.317 SGL Command Set: Supported 00:37:57.317 SGL Keyed: Not Supported 00:37:57.317 SGL Bit Bucket Descriptor: Not Supported 00:37:57.317 SGL Metadata Pointer: Not Supported 00:37:57.317 Oversized SGL: Not Supported 00:37:57.317 SGL Metadata Address: Not Supported 00:37:57.317 SGL Offset: Supported 00:37:57.317 Transport SGL Data Block: Not Supported 00:37:57.317 Replay Protected Memory Block: 
Not Supported 00:37:57.317 00:37:57.317 Firmware Slot Information 00:37:57.317 ========================= 00:37:57.317 Active slot: 0 00:37:57.317 00:37:57.317 00:37:57.317 Error Log 00:37:57.317 ========= 00:37:57.317 00:37:57.317 Active Namespaces 00:37:57.317 ================= 00:37:57.317 Discovery Log Page 00:37:57.317 ================== 00:37:57.317 Generation Counter: 2 00:37:57.317 Number of Records: 2 00:37:57.317 Record Format: 0 00:37:57.317 00:37:57.317 Discovery Log Entry 0 00:37:57.317 ---------------------- 00:37:57.317 Transport Type: 3 (TCP) 00:37:57.317 Address Family: 1 (IPv4) 00:37:57.317 Subsystem Type: 3 (Current Discovery Subsystem) 00:37:57.317 Entry Flags: 00:37:57.317 Duplicate Returned Information: 0 00:37:57.317 Explicit Persistent Connection Support for Discovery: 0 00:37:57.317 Transport Requirements: 00:37:57.317 Secure Channel: Not Specified 00:37:57.317 Port ID: 1 (0x0001) 00:37:57.317 Controller ID: 65535 (0xffff) 00:37:57.317 Admin Max SQ Size: 32 00:37:57.317 Transport Service Identifier: 4420 00:37:57.317 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:37:57.317 Transport Address: 10.0.0.1 00:37:57.317 Discovery Log Entry 1 00:37:57.317 ---------------------- 00:37:57.317 Transport Type: 3 (TCP) 00:37:57.317 Address Family: 1 (IPv4) 00:37:57.317 Subsystem Type: 2 (NVM Subsystem) 00:37:57.317 Entry Flags: 00:37:57.317 Duplicate Returned Information: 0 00:37:57.317 Explicit Persistent Connection Support for Discovery: 0 00:37:57.317 Transport Requirements: 00:37:57.317 Secure Channel: Not Specified 00:37:57.317 Port ID: 1 (0x0001) 00:37:57.317 Controller ID: 65535 (0xffff) 00:37:57.317 Admin Max SQ Size: 32 00:37:57.317 Transport Service Identifier: 4420 00:37:57.317 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:37:57.317 Transport Address: 10.0.0.1 00:37:57.317 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:57.317 EAL: No free 2048 kB hugepages reported on node 1 00:37:57.577 get_feature(0x01) failed 00:37:57.577 get_feature(0x02) failed 00:37:57.577 get_feature(0x04) failed 00:37:57.577 ===================================================== 00:37:57.577 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:57.577 ===================================================== 00:37:57.577 Controller Capabilities/Features 00:37:57.577 ================================ 00:37:57.577 Vendor ID: 0000 00:37:57.577 Subsystem Vendor ID: 0000 00:37:57.577 Serial Number: 36c7c148aad4d5bb0047 00:37:57.577 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:37:57.577 Firmware Version: 6.7.0-68 00:37:57.577 Recommended Arb Burst: 6 00:37:57.577 IEEE OUI Identifier: 00 00 00 00:37:57.577 Multi-path I/O 00:37:57.577 May have multiple subsystem ports: Yes 00:37:57.577 May have multiple controllers: Yes 00:37:57.577 Associated with SR-IOV VF: No 00:37:57.577 Max Data Transfer Size: Unlimited 00:37:57.577 Max Number of Namespaces: 1024 00:37:57.577 Max Number of I/O Queues: 128 00:37:57.577 NVMe Specification Version (VS): 1.3 00:37:57.577 NVMe Specification Version (Identify): 1.3 00:37:57.577 Maximum Queue Entries: 1024 00:37:57.577 Contiguous Queues Required: No 00:37:57.577 Arbitration Mechanisms Supported 00:37:57.577 Weighted Round Robin: Not Supported 00:37:57.577 Vendor Specific: Not Supported 
00:37:57.577 Reset Timeout: 7500 ms 00:37:57.577 Doorbell Stride: 4 bytes 00:37:57.577 NVM Subsystem Reset: Not Supported 00:37:57.577 Command Sets Supported 00:37:57.577 NVM Command Set: Supported 00:37:57.577 Boot Partition: Not Supported 00:37:57.577 Memory Page Size Minimum: 4096 bytes 00:37:57.577 Memory Page Size Maximum: 4096 bytes 00:37:57.577 Persistent Memory Region: Not Supported 00:37:57.577 Optional Asynchronous Events Supported 00:37:57.577 Namespace Attribute Notices: Supported 00:37:57.577 Firmware Activation Notices: Not Supported 00:37:57.577 ANA Change Notices: Supported 00:37:57.577 PLE Aggregate Log Change Notices: Not Supported 00:37:57.577 LBA Status Info Alert Notices: Not Supported 00:37:57.577 EGE Aggregate Log Change Notices: Not Supported 00:37:57.577 Normal NVM Subsystem Shutdown event: Not Supported 00:37:57.577 Zone Descriptor Change Notices: Not Supported 00:37:57.577 Discovery Log Change Notices: Not Supported 00:37:57.577 Controller Attributes 00:37:57.577 128-bit Host Identifier: Supported 00:37:57.578 Non-Operational Permissive Mode: Not Supported 00:37:57.578 NVM Sets: Not Supported 00:37:57.578 Read Recovery Levels: Not Supported 00:37:57.578 Endurance Groups: Not Supported 00:37:57.578 Predictable Latency Mode: Not Supported 00:37:57.578 Traffic Based Keep ALive: Supported 00:37:57.578 Namespace Granularity: Not Supported 00:37:57.578 SQ Associations: Not Supported 00:37:57.578 UUID List: Not Supported 00:37:57.578 Multi-Domain Subsystem: Not Supported 00:37:57.578 Fixed Capacity Management: Not Supported 00:37:57.578 Variable Capacity Management: Not Supported 00:37:57.578 Delete Endurance Group: Not Supported 00:37:57.578 Delete NVM Set: Not Supported 00:37:57.578 Extended LBA Formats Supported: Not Supported 00:37:57.578 Flexible Data Placement Supported: Not Supported 00:37:57.578 00:37:57.578 Controller Memory Buffer Support 00:37:57.578 ================================ 00:37:57.578 Supported: No 00:37:57.578 00:37:57.578 Persistent Memory Region Support 00:37:57.578 ================================ 00:37:57.578 Supported: No 00:37:57.578 00:37:57.578 Admin Command Set Attributes 00:37:57.578 ============================ 00:37:57.578 Security Send/Receive: Not Supported 00:37:57.578 Format NVM: Not Supported 00:37:57.578 Firmware Activate/Download: Not Supported 00:37:57.578 Namespace Management: Not Supported 00:37:57.578 Device Self-Test: Not Supported 00:37:57.578 Directives: Not Supported 00:37:57.578 NVMe-MI: Not Supported 00:37:57.578 Virtualization Management: Not Supported 00:37:57.578 Doorbell Buffer Config: Not Supported 00:37:57.578 Get LBA Status Capability: Not Supported 00:37:57.578 Command & Feature Lockdown Capability: Not Supported 00:37:57.578 Abort Command Limit: 4 00:37:57.578 Async Event Request Limit: 4 00:37:57.578 Number of Firmware Slots: N/A 00:37:57.578 Firmware Slot 1 Read-Only: N/A 00:37:57.578 Firmware Activation Without Reset: N/A 00:37:57.578 Multiple Update Detection Support: N/A 00:37:57.578 Firmware Update Granularity: No Information Provided 00:37:57.578 Per-Namespace SMART Log: Yes 00:37:57.578 Asymmetric Namespace Access Log Page: Supported 00:37:57.578 ANA Transition Time : 10 sec 00:37:57.578 00:37:57.578 Asymmetric Namespace Access Capabilities 00:37:57.578 ANA Optimized State : Supported 00:37:57.578 ANA Non-Optimized State : Supported 00:37:57.578 ANA Inaccessible State : Supported 00:37:57.578 ANA Persistent Loss State : Supported 00:37:57.578 ANA Change State : Supported 00:37:57.578 ANAGRPID is not 
changed : No 00:37:57.578 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:37:57.578 00:37:57.578 ANA Group Identifier Maximum : 128 00:37:57.578 Number of ANA Group Identifiers : 128 00:37:57.578 Max Number of Allowed Namespaces : 1024 00:37:57.578 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:37:57.578 Command Effects Log Page: Supported 00:37:57.578 Get Log Page Extended Data: Supported 00:37:57.578 Telemetry Log Pages: Not Supported 00:37:57.578 Persistent Event Log Pages: Not Supported 00:37:57.578 Supported Log Pages Log Page: May Support 00:37:57.578 Commands Supported & Effects Log Page: Not Supported 00:37:57.578 Feature Identifiers & Effects Log Page:May Support 00:37:57.578 NVMe-MI Commands & Effects Log Page: May Support 00:37:57.578 Data Area 4 for Telemetry Log: Not Supported 00:37:57.578 Error Log Page Entries Supported: 128 00:37:57.578 Keep Alive: Supported 00:37:57.578 Keep Alive Granularity: 1000 ms 00:37:57.578 00:37:57.578 NVM Command Set Attributes 00:37:57.578 ========================== 00:37:57.578 Submission Queue Entry Size 00:37:57.578 Max: 64 00:37:57.578 Min: 64 00:37:57.578 Completion Queue Entry Size 00:37:57.578 Max: 16 00:37:57.578 Min: 16 00:37:57.578 Number of Namespaces: 1024 00:37:57.578 Compare Command: Not Supported 00:37:57.578 Write Uncorrectable Command: Not Supported 00:37:57.578 Dataset Management Command: Supported 00:37:57.578 Write Zeroes Command: Supported 00:37:57.578 Set Features Save Field: Not Supported 00:37:57.578 Reservations: Not Supported 00:37:57.578 Timestamp: Not Supported 00:37:57.578 Copy: Not Supported 00:37:57.578 Volatile Write Cache: Present 00:37:57.578 Atomic Write Unit (Normal): 1 00:37:57.578 Atomic Write Unit (PFail): 1 00:37:57.578 Atomic Compare & Write Unit: 1 00:37:57.578 Fused Compare & Write: Not Supported 00:37:57.578 Scatter-Gather List 00:37:57.578 SGL Command Set: Supported 00:37:57.578 SGL Keyed: Not Supported 00:37:57.578 SGL Bit Bucket Descriptor: Not Supported 00:37:57.578 SGL Metadata Pointer: Not Supported 00:37:57.578 Oversized SGL: Not Supported 00:37:57.578 SGL Metadata Address: Not Supported 00:37:57.578 SGL Offset: Supported 00:37:57.578 Transport SGL Data Block: Not Supported 00:37:57.578 Replay Protected Memory Block: Not Supported 00:37:57.578 00:37:57.578 Firmware Slot Information 00:37:57.578 ========================= 00:37:57.578 Active slot: 0 00:37:57.578 00:37:57.578 Asymmetric Namespace Access 00:37:57.578 =========================== 00:37:57.578 Change Count : 0 00:37:57.578 Number of ANA Group Descriptors : 1 00:37:57.578 ANA Group Descriptor : 0 00:37:57.578 ANA Group ID : 1 00:37:57.578 Number of NSID Values : 1 00:37:57.578 Change Count : 0 00:37:57.578 ANA State : 1 00:37:57.578 Namespace Identifier : 1 00:37:57.578 00:37:57.578 Commands Supported and Effects 00:37:57.578 ============================== 00:37:57.578 Admin Commands 00:37:57.578 -------------- 00:37:57.578 Get Log Page (02h): Supported 00:37:57.578 Identify (06h): Supported 00:37:57.578 Abort (08h): Supported 00:37:57.578 Set Features (09h): Supported 00:37:57.578 Get Features (0Ah): Supported 00:37:57.578 Asynchronous Event Request (0Ch): Supported 00:37:57.578 Keep Alive (18h): Supported 00:37:57.578 I/O Commands 00:37:57.578 ------------ 00:37:57.578 Flush (00h): Supported 00:37:57.578 Write (01h): Supported LBA-Change 00:37:57.578 Read (02h): Supported 00:37:57.578 Write Zeroes (08h): Supported LBA-Change 00:37:57.578 Dataset Management (09h): Supported 00:37:57.578 00:37:57.578 Error Log 00:37:57.578 ========= 
00:37:57.578 Entry: 0 00:37:57.578 Error Count: 0x3 00:37:57.578 Submission Queue Id: 0x0 00:37:57.578 Command Id: 0x5 00:37:57.578 Phase Bit: 0 00:37:57.578 Status Code: 0x2 00:37:57.578 Status Code Type: 0x0 00:37:57.578 Do Not Retry: 1 00:37:57.578 Error Location: 0x28 00:37:57.578 LBA: 0x0 00:37:57.578 Namespace: 0x0 00:37:57.578 Vendor Log Page: 0x0 00:37:57.578 ----------- 00:37:57.578 Entry: 1 00:37:57.578 Error Count: 0x2 00:37:57.578 Submission Queue Id: 0x0 00:37:57.578 Command Id: 0x5 00:37:57.578 Phase Bit: 0 00:37:57.578 Status Code: 0x2 00:37:57.578 Status Code Type: 0x0 00:37:57.578 Do Not Retry: 1 00:37:57.578 Error Location: 0x28 00:37:57.578 LBA: 0x0 00:37:57.578 Namespace: 0x0 00:37:57.578 Vendor Log Page: 0x0 00:37:57.578 ----------- 00:37:57.578 Entry: 2 00:37:57.578 Error Count: 0x1 00:37:57.578 Submission Queue Id: 0x0 00:37:57.578 Command Id: 0x4 00:37:57.578 Phase Bit: 0 00:37:57.578 Status Code: 0x2 00:37:57.578 Status Code Type: 0x0 00:37:57.578 Do Not Retry: 1 00:37:57.578 Error Location: 0x28 00:37:57.578 LBA: 0x0 00:37:57.578 Namespace: 0x0 00:37:57.578 Vendor Log Page: 0x0 00:37:57.578 00:37:57.578 Number of Queues 00:37:57.578 ================ 00:37:57.578 Number of I/O Submission Queues: 128 00:37:57.578 Number of I/O Completion Queues: 128 00:37:57.578 00:37:57.578 ZNS Specific Controller Data 00:37:57.578 ============================ 00:37:57.578 Zone Append Size Limit: 0 00:37:57.578 00:37:57.578 00:37:57.578 Active Namespaces 00:37:57.578 ================= 00:37:57.578 get_feature(0x05) failed 00:37:57.578 Namespace ID:1 00:37:57.578 Command Set Identifier: NVM (00h) 00:37:57.578 Deallocate: Supported 00:37:57.578 Deallocated/Unwritten Error: Not Supported 00:37:57.578 Deallocated Read Value: Unknown 00:37:57.578 Deallocate in Write Zeroes: Not Supported 00:37:57.578 Deallocated Guard Field: 0xFFFF 00:37:57.578 Flush: Supported 00:37:57.578 Reservation: Not Supported 00:37:57.578 Namespace Sharing Capabilities: Multiple Controllers 00:37:57.578 Size (in LBAs): 3125627568 (1490GiB) 00:37:57.578 Capacity (in LBAs): 3125627568 (1490GiB) 00:37:57.578 Utilization (in LBAs): 3125627568 (1490GiB) 00:37:57.578 UUID: 5d7530ca-36f4-4600-8d21-004b5517075e 00:37:57.578 Thin Provisioning: Not Supported 00:37:57.579 Per-NS Atomic Units: Yes 00:37:57.579 Atomic Boundary Size (Normal): 0 00:37:57.579 Atomic Boundary Size (PFail): 0 00:37:57.579 Atomic Boundary Offset: 0 00:37:57.579 NGUID/EUI64 Never Reused: No 00:37:57.579 ANA group ID: 1 00:37:57.579 Namespace Write Protected: No 00:37:57.579 Number of LBA Formats: 1 00:37:57.579 Current LBA Format: LBA Format #00 00:37:57.579 LBA Format #00: Data Size: 512 Metadata Size: 0 00:37:57.579 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:57.579 rmmod nvme_tcp 00:37:57.579 rmmod nvme_fabrics 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:57.579 11:46:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.486 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:59.486 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:37:59.486 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:59.486 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:37:59.745 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:59.745 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:59.745 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:59.745 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:59.745 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:59.745 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:59.745 11:46:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:03.940 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:38:03.940 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:03.940 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:05.320 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:38:05.320 00:38:05.320 real 0m22.591s 00:38:05.320 user 0m5.507s 00:38:05.320 sys 0m12.672s 00:38:05.320 11:46:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:05.320 11:46:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:38:05.320 ************************************ 00:38:05.320 END TEST nvmf_identify_kernel_target 00:38:05.320 ************************************ 00:38:05.580 11:46:30 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:38:05.580 11:46:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:05.580 11:46:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:05.580 11:46:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:05.580 ************************************ 00:38:05.580 START TEST nvmf_auth_host 00:38:05.580 ************************************ 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:38:05.580 * Looking for test storage... 00:38:05.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
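The nvmf_identify_kernel_target run that ends above drives the kernel nvmet target entirely through configfs: configure_kernel_target (nvmf/common.sh@632-677) creates the subsystem, namespace and port, and clean_kernel_target (@684-695) tears them back down. A minimal sketch of that setup/teardown pair follows; the NQN, backing device, address and port are the values recorded in the log, the function names are illustrative, and the configfs attribute file names are the standard nvmet ones, assumed here because the xtrace above shows only the echo commands and not their redirect targets.

  # Sketch of the configfs sequence traced above; wrappers and attribute paths are assumptions.
  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  setup_kernel_target() {                  # mirrors configure_kernel_target
      modprobe nvmet
      mkdir -p "$subsys/namespaces/1" "$port"
      echo 1            > "$subsys/attr_allow_any_host"   # assumed target of the 'echo 1' at common.sh@667
      echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
      echo 1            > "$subsys/namespaces/1/enable"
      echo 10.0.0.1     > "$port/addr_traddr"
      echo tcp          > "$port/addr_trtype"
      echo 4420         > "$port/addr_trsvcid"
      echo ipv4         > "$port/addr_adrfam"
      ln -s "$subsys" "$port/subsystems/"                  # publish the subsystem on the port
  }

  cleanup_kernel_target() {                # mirrors clean_kernel_target
      echo 0 > "$subsys/namespaces/1/enable"
      rm -f "$port/subsystems/$nqn"
      rmdir "$subsys/namespaces/1" "$port" "$subsys"
      modprobe -r nvmet_tcp nvmet
  }

After setup, the kernel target answers the 'nvme discover ... -a 10.0.0.1 -t tcp -s 4420' query exactly as shown in the discovery log recorded above.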
00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:38:05.580 11:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:15.567 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:15.567 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:15.567 Found net devices under 0000:af:00.0: cvl_0_0 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:15.567 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:15.568 Found net devices under 0000:af:00.1: cvl_0_1 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:15.568 11:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:15.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:15.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:38:15.568 00:38:15.568 --- 10.0.0.2 ping statistics --- 00:38:15.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.568 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:15.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:15.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:38:15.568 00:38:15.568 --- 10.0.0.1 ping statistics --- 00:38:15.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.568 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=4141960 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 4141960 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 
-e 0xFFFF -L nvme_auth 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 4141960 ']' 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:15.568 11:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9edca61c940de08c804cf6a19971748f 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.64M 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9edca61c940de08c804cf6a19971748f 0 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9edca61c940de08c804cf6a19971748f 0 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9edca61c940de08c804cf6a19971748f 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.64M 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.64M 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.64M 
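The keys[0] entry just produced comes from gen_dhchap_key / format_dhchap_key (nvmf/common.sh@723-732): random bytes are hex-dumped with xxd, wrapped into a DH-HMAC-CHAP secret by a small embedded python step, and written mode 0600 to a mktemp file. A standalone sketch of the same idea is below; gen_key is a hypothetical name, and the base64(key || little-endian CRC-32) layout is the standard DHHC-1 secret encoding, assumed here rather than copied from common.sh.

  # Sketch of the key generation traced above; the helper name and the exact encoding are assumptions.
  gen_key() {
      local digest=$1 hexlen=$2            # digest index: 0=null 1=sha256 2=sha384 3=sha512
      local hex file
      hex=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)    # hexlen/2 random bytes, hex-encoded
      file=$(mktemp -t spdk.key.XXX)
      python3 -c 'import base64, sys, zlib
  k = bytes.fromhex(sys.argv[1])
  crc = zlib.crc32(k).to_bytes(4, "little")                # assumed: secret = base64(key || CRC-32 LE)
  print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$hex" "$digest" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }
  gen_key 0 32    # 32 hex chars with the null digest, i.e. a key like /tmp/spdk.key-null.64M above

The remaining gen_dhchap_key calls below repeat the same cycle with the other digest indices and lengths to fill keys[1..3] and the ckeys array.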
00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c1b09f984dbbb540fddbb701df18997fd8908f009fd3af7e60ef56035eacadad 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.L4V 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c1b09f984dbbb540fddbb701df18997fd8908f009fd3af7e60ef56035eacadad 3 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c1b09f984dbbb540fddbb701df18997fd8908f009fd3af7e60ef56035eacadad 3 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c1b09f984dbbb540fddbb701df18997fd8908f009fd3af7e60ef56035eacadad 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.L4V 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.L4V 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.L4V 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=70b00d07060fe4dc660bba01a5de444a6967f703cc469bb1 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pTY 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 70b00d07060fe4dc660bba01a5de444a6967f703cc469bb1 0 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 70b00d07060fe4dc660bba01a5de444a6967f703cc469bb1 0 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:38:15.568 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=70b00d07060fe4dc660bba01a5de444a6967f703cc469bb1 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pTY 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pTY 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.pTY 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d83c28f7f8454587656fab8f712c92f068df19a7791e99ed 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3kf 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d83c28f7f8454587656fab8f712c92f068df19a7791e99ed 2 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d83c28f7f8454587656fab8f712c92f068df19a7791e99ed 2 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d83c28f7f8454587656fab8f712c92f068df19a7791e99ed 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3kf 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3kf 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3kf 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=9fd815b01b5736a4bd3673f7c2d5c17c 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.22K 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9fd815b01b5736a4bd3673f7c2d5c17c 1 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9fd815b01b5736a4bd3673f7c2d5c17c 1 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9fd815b01b5736a4bd3673f7c2d5c17c 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.22K 00:38:15.569 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.22K 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.22K 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=664fa86413786e8f72bd91f778da36df 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.syM 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 664fa86413786e8f72bd91f778da36df 1 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 664fa86413786e8f72bd91f778da36df 1 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=664fa86413786e8f72bd91f778da36df 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.syM 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.syM 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.syM 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:15.829 11:46:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f0253bf6a8547c0c57fc632d8a465a4f4fc76a9f02c812fb 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wAB 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f0253bf6a8547c0c57fc632d8a465a4f4fc76a9f02c812fb 2 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f0253bf6a8547c0c57fc632d8a465a4f4fc76a9f02c812fb 2 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f0253bf6a8547c0c57fc632d8a465a4f4fc76a9f02c812fb 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wAB 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wAB 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.wAB 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6c86e2765cd957ae28421640b762ae1e 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wIv 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6c86e2765cd957ae28421640b762ae1e 0 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6c86e2765cd957ae28421640b762ae1e 0 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6c86e2765cd957ae28421640b762ae1e 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:38:15.829 11:46:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wIv 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wIv 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.wIv 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=16ade199a0992de90a42d313fb47dccd3ff2065904b94ec288dcacf00a1d5721 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fki 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 16ade199a0992de90a42d313fb47dccd3ff2065904b94ec288dcacf00a1d5721 3 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 16ade199a0992de90a42d313fb47dccd3ff2065904b94ec288dcacf00a1d5721 3 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=16ade199a0992de90a42d313fb47dccd3ff2065904b94ec288dcacf00a1d5721 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:38:15.829 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fki 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fki 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.fki 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4141960 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 4141960 ']' 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
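Taken together, the gen_dhchap_key calls traced above reduce to: read random bytes with xxd, then wrap them into the DHHC-1:<digest-id>:<base64 secret>: form via the inline python step before chmod 0600. A minimal stand-alone sketch of that flow follows; gen_key and the spdk.key-sketch template are hypothetical names, and the CRC-32 suffix and its byte order are my assumption about what the unseen python body emits, not code copied from nvmf/common.sh.

gen_key() {
    # digest_id: 0=null 1=sha256 2=sha384 3=sha512 (as in the digests map above)
    # hexlen:    length of the hex secret, e.g. 32 or 64
    local digest_id=$1 hexlen=$2
    local key file
    key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)    # same randomness source as the trace
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 - "$key" "$digest_id" > "$file" <<'PY'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")          # CRC suffix and byte order assumed
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}
# e.g. gen_key 3 64 -> a sha512-tagged 32-byte secret, analogous to keys[4] above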
00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:16.089 11:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.64M 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.L4V ]] 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.L4V 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.pTY 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3kf ]] 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3kf 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.22K 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.syM ]] 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.syM 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
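The key-registration loop running around this point (and continuing below for the remaining keyids) simply hands each generated key file to the target process over /var/tmp/spdk.sock. Stripped of the rpc_cmd wrapper, one iteration looks roughly like this sketch; the rpc.py path is assumed from the workspace layout, and the file names are the ones generated above.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py    # path assumed
"$RPC" keyring_file_add_key key1  /tmp/spdk.key-null.pTY                # host secret for keyid 1
"$RPC" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3kf              # controller (bidirectional) secret for keyid 1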
00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.wAB 00:38:16.348 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.wIv ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.wIv 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.fki 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
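configure_kernel_target, whose subsystem, namespace, and port paths are set just above, drives the Linux kernel nvmet target through configfs; the modprobe/setup.sh/mkdir/echo/ln trace that follows populates roughly the layout sketched here. The attribute file names are my reading of the standard nvmet configfs tree (the redirect targets themselves are not visible in the xtrace), so treat this as an orientation aid rather than the script verbatim.

# run as root with the nvmet modules loaded
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing device found by the block scan below
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the port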
00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:16.349 11:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:20.540 Waiting for block devices as requested 00:38:20.541 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:20.541 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:20.541 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:20.541 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:20.541 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:20.799 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:20.799 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:20.799 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:21.058 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:21.058 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:21.058 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:21.317 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:21.317 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:21.317 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:21.576 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:21.576 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:21.576 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:22.512 No valid GPT data, bailing 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:22.512 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:22.513 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:38:22.771 00:38:22.771 Discovery Log Number of Records 2, Generation counter 2 00:38:22.771 =====Discovery Log Entry 0====== 00:38:22.771 trtype: tcp 00:38:22.771 adrfam: ipv4 00:38:22.771 subtype: current discovery subsystem 00:38:22.771 treq: not specified, sq flow control disable supported 00:38:22.771 portid: 1 00:38:22.771 trsvcid: 4420 00:38:22.771 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:22.771 traddr: 10.0.0.1 00:38:22.771 eflags: none 00:38:22.771 sectype: none 00:38:22.771 =====Discovery Log Entry 1====== 00:38:22.771 trtype: tcp 00:38:22.771 adrfam: ipv4 00:38:22.771 subtype: nvme subsystem 00:38:22.771 treq: not specified, sq flow control disable supported 00:38:22.771 portid: 1 00:38:22.771 trsvcid: 4420 00:38:22.771 subnqn: nqn.2024-02.io.spdk:cnode0 00:38:22.771 traddr: 10.0.0.1 00:38:22.771 eflags: none 00:38:22.771 sectype: none 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 
]] 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:22.771 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.772 nvme0n1 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.772 
11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:22.772 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.030 
11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.030 11:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.030 nvme0n1 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.030 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:23.031 11:46:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:23.031 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.290 nvme0n1 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
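The per-digest/per-dhgroup rounds traced above and below all follow the same four RPC steps: constrain the initiator's DH-HMAC-CHAP digests and dhgroups, attach using the keyring names registered earlier, confirm the controller came up, then detach. Condensed with this round's values (sha256, ffdhe2048, keyid 1) and an assumed rpc.py path:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py    # path assumed
"$RPC" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$RPC" bdev_nvme_get_controllers | jq -r '.[].name'                     # expect nvme0
"$RPC" bdev_nvme_detach_controller nvme0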
00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.290 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.550 nvme0n1 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:23.550 11:46:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.550 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.809 nvme0n1 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:23.809 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:23.810 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.068 nvme0n1 00:38:24.068 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.068 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.069 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.069 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.069 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.069 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.069 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:24.069 11:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.069 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.069 11:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.069 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.328 nvme0n1 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:24.328 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:24.329 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:24.329 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.329 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.587 nvme0n1 00:38:24.587 
11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.587 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.846 nvme0n1 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
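The cycle being traced here is the same for every key: host/auth.sh first provisions the key on the target, then connect_authenticate pins the SPDK host to a single DH-HMAC-CHAP digest and DH group via bdev_nvme_set_options, attaches to the kernel target at 10.0.0.1:4420 with the matching --dhchap-key (and --dhchap-ctrlr-key when a controller secret exists), checks bdev_nvme_get_controllers for nvme0, and detaches before the next key is tried. A minimal stand-alone sketch of one such round, assuming scripts/rpc.py stands in for the test's rpc_cmd wrapper and reusing the NQNs and key names visible in this trace (key3/ckey3 are set up earlier in the script, not shown in this excerpt):

  # one DH-HMAC-CHAP authentication round (sketch only, not the script verbatim)
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0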
00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:24.846 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.105 nvme0n1 00:38:25.105 11:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.105 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.105 11:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.105 
11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:25.105 11:46:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.105 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.364 nvme0n1 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:38:25.364 11:46:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.364 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.622 nvme0n1 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:25.623 11:46:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.623 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.881 nvme0n1 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:25.881 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.141 11:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.141 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.141 11:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.141 11:46:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.141 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.400 nvme0n1 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
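On the target side, the repeated echo 'hmac(sha256)' / echo <dhgroup> / echo DHHC-1:... lines in this trace are nvmet_auth_set_key handing the digest, DH group and key material to the Linux kernel nvmet target for this host before each connection attempt. A rough equivalent, assuming the standard nvmet configfs attributes rather than the script's own helper (the paths and attribute names below are an assumption, not taken from this trace):

  # target-side provisioning for one host (sketch; substitute real secrets)
  hostnqn=nqn.2024-02.io.spdk:host0
  host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"      # negotiated digest
  echo ffdhe4096      > "$host_cfg/dhchap_dhgroup"   # DH group under test
  echo 'DHHC-1:02:<host secret>'       > "$host_cfg/dhchap_key"       # key3 in this pass
  echo 'DHHC-1:00:<controller secret>' > "$host_cfg/dhchap_ctrl_key"  # ckey3, bidirectional auth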
00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:26.400 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.401 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.401 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:26.401 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:26.401 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:26.401 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:26.401 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:26.401 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:26.401 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.401 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.660 nvme0n1 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.660 11:46:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.660 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.919 nvme0n1 00:38:26.919 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.919 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:26.919 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.919 11:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:26.919 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.919 11:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:26.919 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:26.919 11:46:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.178 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.437 nvme0n1 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:27.437 
11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:27.437 11:46:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:27.437 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.004 nvme0n1 00:38:28.004 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.004 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:28.004 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.004 11:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:28.004 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.004 11:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.004 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.571 nvme0n1 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:28.571 
11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:28.571 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.138 nvme0n1 00:38:29.138 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.138 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:29.138 11:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:29.138 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.138 11:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:29.138 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.139 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.398 nvme0n1 00:38:29.398 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.398 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:29.398 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.398 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:29.398 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.398 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:29.657 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:29.658 11:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.226 nvme0n1 00:38:30.226 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.226 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:30.226 11:46:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.226 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:30.226 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.226 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.226 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:30.226 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:30.226 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.226 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.485 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.485 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:30.485 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:38:30.485 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:30.485 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:30.485 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:30.485 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:30.485 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:30.486 11:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.154 nvme0n1 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:31.154 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.091 nvme0n1 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:32.091 
11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:32.091 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
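Each host-side authentication attempt traced above follows the same four-step cycle: restrict the initiator to one digest and DH group, attach the controller with the DH-HMAC-CHAP key (plus the controller key when one is defined), confirm the controller actually came up, then detach before the next combination. A minimal sketch of that cycle, reconstructed from the rpc_cmd calls visible in this trace; rpc_cmd is the test-harness wrapper (assumed here to call scripts/rpc.py), and the key names refer to keys registered earlier in the test, outside this excerpt.

# connect_authenticate sketch: one digest/dhgroup/key combination, host side only.
# Assumptions: rpc_cmd wraps scripts/rpc.py; key2/ckey2 were registered beforehand.
rpc_cmd() { scripts/rpc.py "$@"; }

digest=sha256 dhgroup=ffdhe6144 keyid=2

# Allow only the digest/DH group under test on the initiator.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach, authenticating with key<N> and, bidirectionally, ckey<N>.
# (The traced script drops --dhchap-ctrlr-key when no ckey exists, e.g. keyid 4.)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The controller only shows up if DH-HMAC-CHAP succeeded.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Tear down before the next combination.
rpc_cmd bdev_nvme_detach_controller nvme0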
00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.092 11:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.660 nvme0n1 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:32.660 
11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.660 11:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.598 nvme0n1 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.598 nvme0n1 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.598 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:33.857 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
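The jump just above from sha256/ffdhe8192 to sha384/ffdhe2048, with the key index wrapping back to 0, is the visible effect of three nested loops in host/auth.sh: every digest is exercised against every DH group and every key index, setting the expected key on the target first and then attempting the host-side connect. A sketch of that structure; the array contents and helper bodies are assumptions inferred from the loop variables and values seen in this excerpt, not the exact script.

# Loop structure implied by host/auth.sh@100-103 in the trace above.
# Only sha256/sha384 and ffdhe2048/6144/8192 appear in this excerpt; the real
# arrays are assumed to cover the remaining digests and DH groups as well.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
declare -a keys ckeys   # keys[0..4]/ckeys[0..4]: DHHC-1 secrets, populated elsewhere in the script

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: publish the expected key
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: the cycle sketched earlier
        done
    done
done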
00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.858 nvme0n1 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:33.858 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.117 11:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.117 nvme0n1 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:38:34.117 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:34.118 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:34.118 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:34.118 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:34.118 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:34.118 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:34.118 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:34.118 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:34.118 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:34.118 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.377 nvme0n1 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.377 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.637 nvme0n1 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
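The trace in this run repeats one cycle per (dhgroup, keyid) pair: nvmet_auth_set_key provisions the DHHC-1 secret (plus the bidirectional controller secret where one exists) on the target side, and connect_authenticate then attaches from the host with the matching keyring entry. A minimal bash sketch of that outer loop, using placeholder secrets rather than the values from this run:

  #!/usr/bin/env bash
  # Illustrative sketch of the per-dhgroup / per-key cycle visible in the xtrace above.
  # The DHHC-1 strings are placeholders, not the secrets used by this job.
  digest=sha384
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
  keys=(
    "DHHC-1:00:placeholder-key-0:"
    "DHHC-1:00:placeholder-key-1:"
    "DHHC-1:01:placeholder-key-2:"
    "DHHC-1:02:placeholder-key-3:"
    "DHHC-1:03:placeholder-key-4:"
  )
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # stands in for: nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      echo "target: expect ${keys[keyid]} with hmac(${digest}) / ${dhgroup}"
      # stands in for: connect_authenticate "$digest" "$dhgroup" "$keyid"
      echo "host:   attach with --dhchap-key key${keyid}"
    done
  done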
00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.637 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.897 nvme0n1 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
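Within connect_authenticate, the host-side work reduces to four bdev_nvme RPCs, all visible verbatim in the trace (rpc_cmd in the harness ultimately forwards to scripts/rpc.py). Reproducing the ffdhe3072 / keyid 1 iteration by hand would look roughly like the lines below; key1 and ckey1 name keyring entries registered earlier in the run, outside this excerpt:

  # Restrict the initiator to the digest/dhgroup combination under test.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # Attach to the target at 10.0.0.1:4420, authenticating with key1 (and ckey1 for
  # bidirectional authentication of the controller).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Confirm the controller came up, then tear it down before the next iteration.
  scripts/rpc.py bdev_nvme_get_controllers
  scripts/rpc.py bdev_nvme_detach_controller nvme0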
00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:34.897 11:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.157 nvme0n1 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.157 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.417 nvme0n1 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:35.417 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.418 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.677 nvme0n1 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.677 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.937 nvme0n1 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:35.937 11:47:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.937 11:47:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.197 nvme0n1 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.197 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.457 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.716 nvme0n1 00:38:36.716 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.716 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:36.716 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.716 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:36.716 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.716 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.716 11:47:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.717 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.976 nvme0n1 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:36.976 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:38:36.977 11:47:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:36.977 11:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.235 nvme0n1 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:37.235 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:38:37.494 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.753 nvme0n1 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:37.753 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.754 11:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.322 nvme0n1 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:38.322 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:38.323 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:38.323 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.323 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.581 nvme0n1 00:38:38.581 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.581 11:47:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:38.581 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.581 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:38.581 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.581 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.841 11:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.101 nvme0n1 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.101 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.360 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.620 nvme0n1 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.620 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:39.880 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
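
For reference, the cycle that the trace above repeats for every keyid reduces to a short RPC sequence. The sketch below is condensed from the commands already visible in this log (nvmet_auth_set_key, rpc_cmd, the 10.0.0.1:4420 listener and the nqn.2024-02.io.spdk host/subsystem names are taken verbatim from the trace, not introduced here); it is illustrative, not a replacement for the full auth.sh logic.

    # target side: install the DH-HMAC-CHAP key (and controller key, when defined) for this keyid
    nvmet_auth_set_key sha384 ffdhe6144 1
    # host side: restrict the initiator to the digest/dhgroup under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # connect with the matching host key (plus --dhchap-ctrlr-key when a ckey exists for this keyid)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verify the controller authenticated and came up, then tear it down before the next keyid
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0
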
00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.881 11:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.140 nvme0n1 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:40.140 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
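
The per-key cycle above is driven by nested loops in auth.sh, which is why the same pattern repeats below first for ffdhe8192 under sha384 and then again for ffdhe2048/ffdhe3072 once the digest switches to sha512. A minimal reconstruction of that driver loop, inferred from the "for digest", "for dhgroup", and "for keyid" expansions and the host/auth.sh@100-104 line tags in the trace (the digests, dhgroups, keys, and ckeys arrays are assumed to be populated earlier in the script):

    # hypothetical sketch of the loop structure implied by the trace
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
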
00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:40.399 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.966 nvme0n1 00:38:40.966 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:40.966 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:40.966 11:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:40.966 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:40.966 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.966 11:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:40.966 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:40.967 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:41.904 nvme0n1 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:41.904 11:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.473 nvme0n1 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:42.473 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:42.474 11:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.411 nvme0n1 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:43.411 11:47:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:43.411 11:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.980 nvme0n1 00:38:43.980 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:43.980 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:43.980 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:43.980 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:43.980 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:44.239 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.240 nvme0n1 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.240 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.499 11:47:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.499 nvme0n1 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.499 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:44.500 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.759 nvme0n1 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.759 11:47:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:44.759 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:44.760 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:44.760 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:44.760 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:44.760 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:44.760 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:44.760 11:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:44.760 11:47:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:44.760 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:44.760 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.019 nvme0n1 00:38:45.019 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.019 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:45.019 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.019 11:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:45.019 11:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.019 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.279 nvme0n1 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.279 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.538 nvme0n1 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.538 
11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:45.538 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.539 11:47:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.539 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.798 nvme0n1 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:45.798 11:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.058 nvme0n1 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.058 11:47:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.058 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.318 nvme0n1 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:46.318 
11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.318 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.578 nvme0n1 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.578 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.838 nvme0n1 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:46.838 11:47:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:46.838 11:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.097 nvme0n1 00:38:47.097 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.097 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:47.097 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:47.097 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.097 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:47.356 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.357 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.616 nvme0n1 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:47.616 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:47.617 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:47.617 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:47.617 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:47.617 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:47.617 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:47.617 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:47.617 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:47.617 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.617 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.876 nvme0n1 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:47.876 11:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.135 nvme0n1 00:38:48.135 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:48.135 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:48.135 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:48.135 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:48.135 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.135 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:48.395 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.654 nvme0n1 00:38:48.654 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:48.654 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:48.654 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:48.654 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:48.654 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.654 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
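Each pass of the loop traced here programs one DH-HMAC-CHAP secret on the kernel nvmet target (nvmet_auth_set_key) and then re-attaches the SPDK host with the matching key (connect_authenticate). The sketch below condenses the keyid=1 / sha512 / ffdhe6144 iteration into explicit commands; the rpc.py invocation, the configfs attribute names and the pre-registered key objects key1/ckey1 are assumptions for illustration, while the RPC names, flags and echoed values are taken from the trace itself.

    #!/usr/bin/env bash
    set -e
    # One iteration of the digest/dhgroup/keyid sweep, condensed from the xtrace above.
    digest=sha512
    dhgroup=ffdhe6144
    keyid=1
    key='DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==:'
    ckey='DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==:'
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path

    # Target side: tell the kernel nvmet host entry which hash, DH group and secrets to expect
    # (attribute names assumed from the kernel nvmet auth interface, not shown in the trace).
    echo "hmac($digest)" > "$host_dir/dhchap_hash"
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"
    echo "$key"          > "$host_dir/dhchap_key"
    echo "$ckey"         > "$host_dir/dhchap_ctrl_key"

    # Host side: restrict the initiator to the same digest/dhgroup, then attach using the
    # key objects (key1/ckey1) assumed to have been registered earlier in the run.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Authentication succeeded if the controller shows up; detach before the next combination.
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0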
00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:48.913 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:48.914 11:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.172 nvme0n1 00:38:49.172 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.172 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:49.172 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.172 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:49.172 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.172 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.431 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.432 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.690 nvme0n1 00:38:49.690 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.690 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:49.690 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.690 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:49.690 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.690 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:49.949 11:47:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:49.950 11:47:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:49.950 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:49.950 11:47:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.209 nvme0n1 00:38:50.209 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.209 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:50.209 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:50.209 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.209 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.209 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.468 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.727 nvme0n1 00:38:50.727 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.727 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:50.727 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:50.727 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.727 11:47:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.727 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWVkY2E2MWM5NDBkZTA4YzgwNGNmNmExOTk3MTc0OGYJBgeA: 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: ]] 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzFiMDlmOTg0ZGJiYjU0MGZkZGJiNzAxZGYxODk5N2ZkODkwOGYwMDlmZDNhZjdlNjBlZjU2MDM1ZWFjYWRhZIDx0A4=: 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.986 11:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.555 nvme0n1 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:51.555 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:51.814 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:51.814 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:51.814 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:51.814 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:51.814 11:47:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:51.814 11:47:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:51.814 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:51.814 11:47:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.382 nvme0n1 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:52.382 11:47:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkODE1YjAxYjU3MzZhNGJkMzY3M2Y3YzJkNWMxN2MsQL+b: 00:38:52.382 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: ]] 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjY0ZmE4NjQxMzc4NmU4ZjcyYmQ5MWY3NzhkYTM2ZGZOSryw: 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:52.383 11:47:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.320 nvme0n1 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjAyNTNiZjZhODU0N2MwYzU3ZmM2MzJkOGE0NjVhNGY0ZmM3NmE5ZjAyYzgxMmZim2e8Lw==: 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: ]] 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmM4NmUyNzY1Y2Q5NTdhZTI4NDIxNjQwYjc2MmFlMWW+Glo4: 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:38:53.320 11:47:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.320 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.889 nvme0n1 00:38:53.889 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.889 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:53.889 11:47:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:53.889 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.889 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.889 11:47:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTZhZGUxOTlhMDk5MmRlOTBhNDJkMzEzZmI0N2RjY2QzZmYyMDY1OTA0Yjk0ZWMyODhkY2FjZjAwYTFkNTcyMX/Z9D0=: 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:38:54.147 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.792 nvme0n1 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzBiMDBkMDcwNjBmZTRkYzY2MGJiYTAxYTVkZTQ0NGE2OTY3ZjcwM2NjNDY5YmIxhnhbvw==: 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgzYzI4ZjdmODQ1NDU4NzY1NmZhYjhmNzEyYzkyZjA2OGRmMTlhNzc5MWU5OWVkk7JFOg==: 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:54.792 
11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.792 request: 00:38:54.792 { 00:38:54.792 "name": "nvme0", 00:38:54.792 "trtype": "tcp", 00:38:54.792 "traddr": "10.0.0.1", 00:38:54.792 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:54.792 "adrfam": "ipv4", 00:38:54.792 "trsvcid": "4420", 00:38:54.792 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:54.792 "method": "bdev_nvme_attach_controller", 00:38:54.792 "req_id": 1 00:38:54.792 } 00:38:54.792 Got JSON-RPC error response 00:38:54.792 response: 00:38:54.792 { 00:38:54.792 "code": -5, 00:38:54.792 "message": "Input/output error" 00:38:54.792 } 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.792 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:38:55.052 
11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.052 request: 00:38:55.052 { 00:38:55.052 "name": "nvme0", 00:38:55.052 "trtype": "tcp", 00:38:55.052 "traddr": "10.0.0.1", 00:38:55.052 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:55.052 "adrfam": "ipv4", 00:38:55.052 "trsvcid": "4420", 00:38:55.052 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:55.052 "dhchap_key": "key2", 00:38:55.052 "method": "bdev_nvme_attach_controller", 00:38:55.052 "req_id": 1 00:38:55.052 } 00:38:55.052 Got JSON-RPC error response 00:38:55.052 response: 00:38:55.052 { 00:38:55.052 "code": -5, 00:38:55.052 "message": "Input/output error" 00:38:55.052 } 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:55.052 
11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:38:55.052 11:47:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.052 request: 00:38:55.052 { 00:38:55.052 "name": "nvme0", 00:38:55.052 "trtype": "tcp", 00:38:55.052 "traddr": "10.0.0.1", 00:38:55.052 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:38:55.052 "adrfam": "ipv4", 00:38:55.052 "trsvcid": "4420", 00:38:55.052 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:38:55.052 "dhchap_key": "key1", 00:38:55.052 "dhchap_ctrlr_key": "ckey2", 00:38:55.052 "method": "bdev_nvme_attach_controller", 00:38:55.052 "req_id": 1 
00:38:55.052 } 00:38:55.052 Got JSON-RPC error response 00:38:55.052 response: 00:38:55.052 { 00:38:55.052 "code": -5, 00:38:55.052 "message": "Input/output error" 00:38:55.052 } 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:55.052 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:55.052 rmmod nvme_tcp 00:38:55.052 rmmod nvme_fabrics 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 4141960 ']' 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 4141960 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 4141960 ']' 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 4141960 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4141960 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4141960' 00:38:55.312 killing process with pid 4141960 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 4141960 00:38:55.312 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 4141960 00:38:55.571 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:55.571 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:55.571 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:55.571 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:55.571 11:47:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:55.571 11:47:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.572 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:55.572 11:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:57.476 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:57.476 11:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:38:57.476 11:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:57.476 11:47:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:38:57.476 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:38:57.476 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:38:57.476 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:57.476 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:57.477 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:57.477 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:57.477 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:57.477 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:57.735 11:47:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:01.930 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:39:01.930 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:39:03.308 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:39:03.308 11:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.64M /tmp/spdk.key-null.pTY /tmp/spdk.key-sha256.22K /tmp/spdk.key-sha384.wAB /tmp/spdk.key-sha512.fki /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:39:03.308 11:47:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:07.503 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:00:04.5 (8086 2021): Already using the 
vfio-pci driver 00:39:07.503 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:39:07.503 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:39:07.503 00:39:07.503 real 1m1.988s 00:39:07.503 user 0m52.348s 00:39:07.503 sys 0m18.627s 00:39:07.503 11:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:07.503 11:47:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.503 ************************************ 00:39:07.503 END TEST nvmf_auth_host 00:39:07.503 ************************************ 00:39:07.503 11:47:32 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:39:07.503 11:47:32 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:39:07.503 11:47:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:07.503 11:47:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:07.503 11:47:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:07.503 ************************************ 00:39:07.503 START TEST nvmf_digest 00:39:07.503 ************************************ 00:39:07.503 11:47:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:39:07.762 * Looking for test storage... 
00:39:07.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:07.762 11:47:32 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:07.762 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:39:07.762 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:07.762 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:07.762 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:07.762 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:07.762 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:07.762 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:07.763 11:47:32 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:39:07.763 11:47:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:15.889 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:16.149 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:16.149 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:16.149 11:47:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:16.149 Found net devices under 0000:af:00.0: cvl_0_0 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:16.149 Found net devices under 0000:af:00.1: cvl_0_1 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:16.149 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:16.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:16.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:39:16.410 00:39:16.410 --- 10.0.0.2 ping statistics --- 00:39:16.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.410 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:16.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:16.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:39:16.410 00:39:16.410 --- 10.0.0.1 ping statistics --- 00:39:16.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.410 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:16.410 ************************************ 00:39:16.410 START TEST nvmf_digest_clean 00:39:16.410 ************************************ 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=4157918 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 4157918 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 4157918 ']' 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:16.410 
11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:16.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:16.410 11:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:16.410 [2024-06-10 11:47:41.436624] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:39:16.410 [2024-06-10 11:47:41.436687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:16.410 EAL: No free 2048 kB hugepages reported on node 1 00:39:16.670 [2024-06-10 11:47:41.566886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.670 [2024-06-10 11:47:41.647108] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:16.670 [2024-06-10 11:47:41.647157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:16.670 [2024-06-10 11:47:41.647171] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:16.670 [2024-06-10 11:47:41.647183] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:16.670 [2024-06-10 11:47:41.647193] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
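At this point nvmf_tgt (pid 4157918) is parked in its --wait-for-rpc state inside the cvl_0_0_ns_spdk namespace, and the rpc_cmd block that follows builds the target side of the digest test. The exact call sequence lives in host/digest.sh (common_target_config) and nvmf/common.sh rather than in this trace; a rough, illustrative equivalent driven over /var/tmp/spdk.sock might look like the sketch below. Only the null0 bdev, the cnode1 NQN, the SPDKISFASTANDAWESOME serial, the '-t tcp -o' transport options and the 10.0.0.2:4420 listener are taken from the surrounding log; the null-bdev size and block size are assumptions.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_start_init                      # release the --wait-for-rpc holding state
  $rpc bdev_null_create null0 1000 512           # backing namespace (size/block size assumed)
  $rpc nvmf_create_transport -t tcp -o           # "*** TCP Transport Init ***" below
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
                                                 # "NVMe/TCP Target Listening on 10.0.0.2 port 4420" below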
00:39:16.670 [2024-06-10 11:47:41.647227] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.239 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:17.239 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:39:17.239 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:17.239 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:17.239 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:17.498 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:17.498 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:39:17.498 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:39:17.498 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:39:17.498 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:17.498 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:17.498 null0 00:39:17.498 [2024-06-10 11:47:42.475175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.498 [2024-06-10 11:47:42.499401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:17.498 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4158195 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4158195 /var/tmp/bperf.sock 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 4158195 ']' 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:39:17.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:17.499 11:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:17.499 [2024-06-10 11:47:42.555942] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:39:17.499 [2024-06-10 11:47:42.555998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158195 ] 00:39:17.759 EAL: No free 2048 kB hugepages reported on node 1 00:39:17.759 [2024-06-10 11:47:42.666378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.759 [2024-06-10 11:47:42.753347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:18.698 11:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:18.698 11:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:39:18.698 11:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:18.698 11:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:18.698 11:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:18.698 11:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:18.698 11:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:18.957 nvme0n1 00:39:18.957 11:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:18.957 11:47:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:19.217 Running I/O for 2 seconds... 
00:39:21.123 00:39:21.123 Latency(us) 00:39:21.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:21.123 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:21.123 nvme0n1 : 2.00 20074.17 78.41 0.00 0.00 6368.88 2949.12 16357.79 00:39:21.123 =================================================================================================================== 00:39:21.123 Total : 20074.17 78.41 0.00 0.00 6368.88 2949.12 16357.79 00:39:21.123 0 00:39:21.123 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:21.123 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:21.123 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:21.123 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:21.123 | select(.opcode=="crc32c") 00:39:21.123 | "\(.module_name) \(.executed)"' 00:39:21.123 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4158195 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 4158195 ']' 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 4158195 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4158195 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4158195' 00:39:21.383 killing process with pid 4158195 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 4158195 00:39:21.383 Received shutdown signal, test time was about 2.000000 seconds 00:39:21.383 00:39:21.383 Latency(us) 00:39:21.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:21.383 =================================================================================================================== 00:39:21.383 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:21.383 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 4158195 00:39:21.642 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:39:21.642 11:47:46 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:21.642 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:21.642 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4158873 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4158873 /var/tmp/bperf.sock 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 4158873 ']' 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:21.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:21.643 11:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:21.643 [2024-06-10 11:47:46.652558] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:39:21.643 [2024-06-10 11:47:46.652632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4158873 ] 00:39:21.643 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:21.643 Zero copy mechanism will not be used. 
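The first randread case (4 KiB, QD 128) has just completed and its bdevperf instance was killed; the trace below repeats the same pattern for the remaining block-size/queue-depth combinations. Condensed into one place, a single run_bperf iteration amounts to the sketch below. The flags, the --ddgst data-digest option and the jq filter are copied from the trace; paths are shortened relative to the spdk checkout, and the final pass/fail comparison is paraphrased from host/digest.sh.

  bperf=/var/tmp/bperf.sock
  build/examples/bdevperf -m 2 -r $bperf -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  # (the script waits for $bperf to appear before issuing RPCs)
  scripts/rpc.py -s $bperf framework_start_init
  scripts/rpc.py -s $bperf bdev_nvme_attach_controller -b nvme0 --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # NVMe/TCP data digest (crc32c) on
  examples/bdev/bdevperf/bdevperf.py -s $bperf perform_tests
  scripts/rpc.py -s $bperf accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # digest.sh@93-98: the case passes if executed > 0 and module_name is "software"
  # (no DSA accelerator is configured on this node); the bdevperf pid is then killed.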
00:39:21.643 EAL: No free 2048 kB hugepages reported on node 1 00:39:21.902 [2024-06-10 11:47:46.764123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:21.902 [2024-06-10 11:47:46.851171] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:22.470 11:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:22.470 11:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:39:22.470 11:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:22.470 11:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:22.470 11:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:23.038 11:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:23.038 11:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:23.297 nvme0n1 00:39:23.298 11:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:23.298 11:47:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:23.298 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:23.298 Zero copy mechanism will not be used. 00:39:23.298 Running I/O for 2 seconds... 
00:39:25.204 00:39:25.204 Latency(us) 00:39:25.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:25.204 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:39:25.204 nvme0n1 : 2.00 3553.28 444.16 0.00 0.00 4499.47 3853.52 16043.21 00:39:25.204 =================================================================================================================== 00:39:25.205 Total : 3553.28 444.16 0.00 0.00 4499.47 3853.52 16043.21 00:39:25.205 0 00:39:25.205 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:25.205 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:25.464 | select(.opcode=="crc32c") 00:39:25.464 | "\(.module_name) \(.executed)"' 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4158873 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 4158873 ']' 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 4158873 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4158873 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4158873' 00:39:25.464 killing process with pid 4158873 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 4158873 00:39:25.464 Received shutdown signal, test time was about 2.000000 seconds 00:39:25.464 00:39:25.464 Latency(us) 00:39:25.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:25.464 =================================================================================================================== 00:39:25.464 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:25.464 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 4158873 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:39:25.724 11:47:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4159546 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4159546 /var/tmp/bperf.sock 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 4159546 ']' 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:25.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:25.724 11:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:25.724 [2024-06-10 11:47:50.781411] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:39:25.724 [2024-06-10 11:47:50.781460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4159546 ] 00:39:25.724 EAL: No free 2048 kB hugepages reported on node 1 00:39:25.983 [2024-06-10 11:47:50.875879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.983 [2024-06-10 11:47:50.959708] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:26.552 11:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:26.553 11:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:39:26.553 11:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:26.553 11:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:26.553 11:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:26.811 11:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:26.811 11:47:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:27.070 nvme0n1 00:39:27.070 11:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:27.070 11:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:27.329 Running I/O for 2 seconds... 
00:39:29.235 00:39:29.235 Latency(us) 00:39:29.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:29.235 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:29.235 nvme0n1 : 2.00 20891.40 81.61 0.00 0.00 6117.01 2936.01 10590.62 00:39:29.235 =================================================================================================================== 00:39:29.235 Total : 20891.40 81.61 0.00 0.00 6117.01 2936.01 10590.62 00:39:29.235 0 00:39:29.235 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:29.235 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:29.235 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:29.235 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:29.235 | select(.opcode=="crc32c") 00:39:29.235 | "\(.module_name) \(.executed)"' 00:39:29.235 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4159546 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 4159546 ']' 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 4159546 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4159546 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4159546' 00:39:29.494 killing process with pid 4159546 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 4159546 00:39:29.494 Received shutdown signal, test time was about 2.000000 seconds 00:39:29.494 00:39:29.494 Latency(us) 00:39:29.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:29.494 =================================================================================================================== 00:39:29.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:29.494 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 4159546 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:39:29.754 11:47:54 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4160193 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4160193 /var/tmp/bperf.sock 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 4160193 ']' 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:29.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:29.754 11:47:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:29.754 [2024-06-10 11:47:54.728497] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:39:29.754 [2024-06-10 11:47:54.728564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160193 ] 00:39:29.754 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:29.754 Zero copy mechanism will not be used. 
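This second clean pass (128 KiB randwrite at queue depth 16) spawns its own bdevperf instance and waits for its RPC socket before configuring it. Roughly, in bash (flags exactly as traced above; waitforlisten is the harness helper from common/autotest_common.sh, and the PID handling is paraphrased):

    bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

    # -z keeps bdevperf alive with no bdevs configured yet; --wait-for-rpc defers
    # initialization until framework_start_init is sent over /var/tmp/bperf.sock
    "$bperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    bperfpid=$!

    # the harness blocks in its waitforlisten helper until the socket accepts RPCs
    waitforlisten "$bperfpid" /var/tmp/bperf.sock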
00:39:29.754 EAL: No free 2048 kB hugepages reported on node 1 00:39:29.754 [2024-06-10 11:47:54.838419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.013 [2024-06-10 11:47:54.926428] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:30.651 11:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:30.651 11:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:39:30.651 11:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:30.651 11:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:30.651 11:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:30.934 11:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:30.934 11:47:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:31.501 nvme0n1 00:39:31.501 11:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:31.501 11:47:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:31.501 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:31.501 Zero copy mechanism will not be used. 00:39:31.501 Running I/O for 2 seconds... 
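Both clean passes end with the same accounting check (traced after the first run above and again after this one): the accel layer's crc32c counters must show that the expected module, software in this configuration, actually executed the digests. A condensed bash sketch of that check, with the jq filter taken verbatim from the trace and the surrounding plumbing paraphrased:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    read -r acc_module acc_executed < <(
        "$rpc" -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

    # the pass only counts as clean if digests were actually computed, and by the
    # expected module (software here, since DSA scanning is disabled for this run)
    (( acc_executed > 0 )) && [[ $acc_module == software ]]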
00:39:34.038 00:39:34.038 Latency(us) 00:39:34.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:34.038 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:39:34.038 nvme0n1 : 2.00 3968.10 496.01 0.00 0.00 4025.44 2844.26 17825.79 00:39:34.038 =================================================================================================================== 00:39:34.038 Total : 3968.10 496.01 0.00 0.00 4025.44 2844.26 17825.79 00:39:34.038 0 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:34.038 | select(.opcode=="crc32c") 00:39:34.038 | "\(.module_name) \(.executed)"' 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4160193 00:39:34.038 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 4160193 ']' 00:39:34.039 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 4160193 00:39:34.039 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:39:34.039 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:34.039 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4160193 00:39:34.039 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:34.039 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:34.039 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4160193' 00:39:34.039 killing process with pid 4160193 00:39:34.039 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 4160193 00:39:34.039 Received shutdown signal, test time was about 2.000000 seconds 00:39:34.039 00:39:34.039 Latency(us) 00:39:34.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:34.039 =================================================================================================================== 00:39:34.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:34.039 11:47:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 4160193 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4157918 00:39:34.039 11:47:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 4157918 ']' 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 4157918 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4157918 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4157918' 00:39:34.039 killing process with pid 4157918 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 4157918 00:39:34.039 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 4157918 00:39:34.298 00:39:34.298 real 0m17.939s 00:39:34.298 user 0m34.689s 00:39:34.298 sys 0m4.999s 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:34.298 ************************************ 00:39:34.298 END TEST nvmf_digest_clean 00:39:34.298 ************************************ 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:34.298 ************************************ 00:39:34.298 START TEST nvmf_digest_error 00:39:34.298 ************************************ 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=4160956 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 4160956 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:39:34.298 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 4160956 ']' 00:39:34.557 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:34.557 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:39:34.557 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:34.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:34.557 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:34.557 11:47:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:34.557 [2024-06-10 11:47:59.453652] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:39:34.557 [2024-06-10 11:47:59.453708] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:34.557 EAL: No free 2048 kB hugepages reported on node 1 00:39:34.557 [2024-06-10 11:47:59.585422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.816 [2024-06-10 11:47:59.698165] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:34.816 [2024-06-10 11:47:59.698218] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:34.816 [2024-06-10 11:47:59.698237] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:34.816 [2024-06-10 11:47:59.698252] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:34.816 [2024-06-10 11:47:59.698265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
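The nvmf_digest_error test starting here reuses the same building blocks but points the target's crc32c work at the error-injection accel module. The steps traced over the next several entries amount to the sequence below (a bash sketch assembled from the traced commands; plain rpc.py calls stand in for the rpc_cmd/bperf_rpc wrappers, with no -s meaning the target's default socket and -s /var/tmp/bperf.sock meaning the bdevperf initiator):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # target: route crc32c to the "error" accel module before framework init,
    # so digests can be corrupted on demand later
    "$rpc" accel_assign_opc -o crc32c -m error

    # initiator (bdevperf): keep per-command NVMe error stats and retry indefinitely
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # connect with data digest enabled while injection is still disabled...
    "$rpc" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then corrupt the next 256 crc32c results and run the timed workload
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256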
00:39:34.816 [2024-06-10 11:47:59.698308] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:35.385 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:35.385 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:39:35.385 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:35.385 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:35.385 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:35.385 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:35.385 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:39:35.385 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.385 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:35.385 [2024-06-10 11:48:00.484766] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:35.645 null0 00:39:35.645 [2024-06-10 11:48:00.582190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:35.645 [2024-06-10 11:48:00.606421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4161258 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4161258 /var/tmp/bperf.sock 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 4161258 ']' 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:35.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:35.645 11:48:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:35.645 [2024-06-10 11:48:00.661034] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:39:35.645 [2024-06-10 11:48:00.661094] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4161258 ] 00:39:35.645 EAL: No free 2048 kB hugepages reported on node 1 00:39:35.905 [2024-06-10 11:48:00.771551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.905 [2024-06-10 11:48:00.858643] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:36.472 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:36.472 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:39:36.472 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:36.473 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:36.732 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:36.732 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.732 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:36.732 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.732 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:36.732 11:48:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:37.300 nvme0n1 00:39:37.300 11:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:39:37.300 11:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.300 11:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:37.300 11:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.300 11:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:37.300 11:48:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:37.300 Running I/O for 2 seconds... 00:39:37.300 [2024-06-10 11:48:02.266814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.266854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.266871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.279855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.279885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.279901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.293471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.293500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.293516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.304454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.304483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.304497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.319286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.319320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.319336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.333712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.333741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.333756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.344838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.344866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5716 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.344880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.359275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.359302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.359317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.370860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.370887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.370902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.384922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.384949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.384964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.300 [2024-06-10 11:48:02.398502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.300 [2024-06-10 11:48:02.398530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.300 [2024-06-10 11:48:02.398544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.409324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.409351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.409366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.423264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.423292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.423307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.438024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.438052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:125 nsid:1 lba:21455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.438067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.449138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.449165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.449181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.464038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.464065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.464080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.477179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.477206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.477220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.489387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.489414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.489429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.502244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.502270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.502284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.514931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.514957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.514971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.528094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.528120] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.528135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.540964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.540995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.541010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.554235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.554263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.554277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.565959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.565987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.566002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.579114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.579142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.579157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.592408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.592436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.592450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.604186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.560 [2024-06-10 11:48:02.604214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.560 [2024-06-10 11:48:02.604230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.560 [2024-06-10 11:48:02.618189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x192a9b0) 00:39:37.561 [2024-06-10 11:48:02.618218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.561 [2024-06-10 11:48:02.618233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.561 [2024-06-10 11:48:02.631455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.561 [2024-06-10 11:48:02.631484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.561 [2024-06-10 11:48:02.631499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.561 [2024-06-10 11:48:02.642593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.561 [2024-06-10 11:48:02.642621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.561 [2024-06-10 11:48:02.642635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.561 [2024-06-10 11:48:02.656732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.561 [2024-06-10 11:48:02.656761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.561 [2024-06-10 11:48:02.656776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.820 [2024-06-10 11:48:02.669226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.820 [2024-06-10 11:48:02.669255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.820 [2024-06-10 11:48:02.669270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.820 [2024-06-10 11:48:02.681336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.820 [2024-06-10 11:48:02.681363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.820 [2024-06-10 11:48:02.681378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.820 [2024-06-10 11:48:02.695781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.820 [2024-06-10 11:48:02.695809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.820 [2024-06-10 11:48:02.695823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.820 [2024-06-10 11:48:02.706147] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.820 [2024-06-10 11:48:02.706174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.820 [2024-06-10 11:48:02.706189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.820 [2024-06-10 11:48:02.720268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.820 [2024-06-10 11:48:02.720295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.820 [2024-06-10 11:48:02.720310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.820 [2024-06-10 11:48:02.733247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.820 [2024-06-10 11:48:02.733274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.820 [2024-06-10 11:48:02.733288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.820 [2024-06-10 11:48:02.745616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.820 [2024-06-10 11:48:02.745644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.820 [2024-06-10 11:48:02.745659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.820 [2024-06-10 11:48:02.759734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.820 [2024-06-10 11:48:02.759761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.820 [2024-06-10 11:48:02.759779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.770228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.770255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.770270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.785390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.785418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.785432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.798275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.798302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.798317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.810903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.810931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.810946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.824544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.824572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.824595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.835842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.835870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.835884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.849968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.849996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.850011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.861451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.861479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.861493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.875440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.875473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.875488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.888979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.889006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.889020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.901095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.901123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.901138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.821 [2024-06-10 11:48:02.914703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:37.821 [2024-06-10 11:48:02.914731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.821 [2024-06-10 11:48:02.914746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.080 [2024-06-10 11:48:02.925730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.080 [2024-06-10 11:48:02.925758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.080 [2024-06-10 11:48:02.925773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.080 [2024-06-10 11:48:02.939466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.080 [2024-06-10 11:48:02.939494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.080 [2024-06-10 11:48:02.939508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.080 [2024-06-10 11:48:02.952685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.080 [2024-06-10 11:48:02.952712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.080 [2024-06-10 11:48:02.952727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.080 [2024-06-10 11:48:02.964742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.080 [2024-06-10 11:48:02.964771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.080 [2024-06-10 11:48:02.964785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.080 [2024-06-10 11:48:02.977254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.080 [2024-06-10 11:48:02.977281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.080 [2024-06-10 11:48:02.977296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.080 [2024-06-10 11:48:02.990952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.080 [2024-06-10 11:48:02.990979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.080 [2024-06-10 11:48:02.990993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.080 [2024-06-10 11:48:03.003085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.080 [2024-06-10 11:48:03.003113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.080 [2024-06-10 11:48:03.003127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.016952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.016979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.016993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.028854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.028881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.028895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.041303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.041331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.041346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.054306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.054333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
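Every read in this window trips the injected digest failure: nvme_tcp reports a data digest error and the command completes with a transient transport error (dnr:0, so retriable), which the initiator absorbs because it was configured above with --bdev-retry-count -1. To tally how many reads were affected in a saved console log, a plain grep is enough (illustrative only; the file name is hypothetical and this is not part of the test):

    grep -c 'data digest error on tqpair' bdevperf-console.log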
00:39:38.081 [2024-06-10 11:48:03.054347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.067228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.067256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.067270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.080545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.080572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.080595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.093679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.093707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.093726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.104450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.104478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.104493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.117961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.117988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.118003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.131278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.131306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.131320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.142971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.142998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 
lba:4581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.143012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.157095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.157123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.157137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.169709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.169736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.169751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.081 [2024-06-10 11:48:03.182627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.081 [2024-06-10 11:48:03.182654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.081 [2024-06-10 11:48:03.182669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.340 [2024-06-10 11:48:03.193678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.193707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.193723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.208079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.208107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.208121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.220677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.220703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.220718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.232374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.232402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.232416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.247447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.247474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.247489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.258342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.258368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.258383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.272707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.272734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.272749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.285054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.285081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.285096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.297764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.297791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.297806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.311599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.311626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.311644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.323933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 
00:39:38.341 [2024-06-10 11:48:03.323960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.323975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.336639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.336667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.336682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.349718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.349745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.349760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.362065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.362092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.362107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.374674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.374701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.374716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.387990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.388018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.388032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.401115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.401142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.401157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.413895] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.413922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.413936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.427665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.427697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.427711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.341 [2024-06-10 11:48:03.441531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.341 [2024-06-10 11:48:03.441557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.341 [2024-06-10 11:48:03.441572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.600 [2024-06-10 11:48:03.454219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.600 [2024-06-10 11:48:03.454246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.600 [2024-06-10 11:48:03.454261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.600 [2024-06-10 11:48:03.468727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.600 [2024-06-10 11:48:03.468754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.600 [2024-06-10 11:48:03.468768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.600 [2024-06-10 11:48:03.479596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.600 [2024-06-10 11:48:03.479623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.600 [2024-06-10 11:48:03.479637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.600 [2024-06-10 11:48:03.493293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.600 [2024-06-10 11:48:03.493320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.600 [2024-06-10 11:48:03.493334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:39:38.600 [2024-06-10 11:48:03.505469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.600 [2024-06-10 11:48:03.505496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.600 [2024-06-10 11:48:03.505510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.518689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.518717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.518732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.532489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.532517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.532531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.543383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.543410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.543424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.557964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.557990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.558005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.571149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.571177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.571192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.583136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.583163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.583178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.597612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.597639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.597653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.608648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.608675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.608689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.622714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.622741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.622755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.634275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.634301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.634316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.648560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.648593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.648612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.660561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.660596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.660610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.673224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.673251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.673266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.686820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.686846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.686860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.601 [2024-06-10 11:48:03.700008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.601 [2024-06-10 11:48:03.700034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.601 [2024-06-10 11:48:03.700049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.712708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.712735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.712749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.724518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.724546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.724561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.738378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.738405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.738419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.750146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.750172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.750187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.763964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.763997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:38.861 [2024-06-10 11:48:03.764012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.776372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.776399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.776413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.789453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.789480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.789495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.801446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.801473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.801488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.814606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.814633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.814648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.827278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.827307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.827322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.840155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.840182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.840197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.853441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.853469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:9798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.853483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.866329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.866356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.866375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.878082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.878110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.878125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.890939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.890967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.890982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.904117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.904145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.904160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.917531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.917558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.917572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.928206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.928233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.928247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.942716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.942743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.942757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:38.861 [2024-06-10 11:48:03.955535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:38.861 [2024-06-10 11:48:03.955563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.861 [2024-06-10 11:48:03.955583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.120 [2024-06-10 11:48:03.967417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.120 [2024-06-10 11:48:03.967446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.120 [2024-06-10 11:48:03.967462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.120 [2024-06-10 11:48:03.982631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.120 [2024-06-10 11:48:03.982663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.120 [2024-06-10 11:48:03.982678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.120 [2024-06-10 11:48:03.993922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.120 [2024-06-10 11:48:03.993950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.120 [2024-06-10 11:48:03.993964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.120 [2024-06-10 11:48:04.009150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.120 [2024-06-10 11:48:04.009178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.120 [2024-06-10 11:48:04.009192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.120 [2024-06-10 11:48:04.020312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.120 [2024-06-10 11:48:04.020341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.120 [2024-06-10 11:48:04.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.120 [2024-06-10 11:48:04.034907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 
00:39:39.120 [2024-06-10 11:48:04.034935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.120 [2024-06-10 11:48:04.034949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.120 [2024-06-10 11:48:04.046256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.120 [2024-06-10 11:48:04.046283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.120 [2024-06-10 11:48:04.046297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.120 [2024-06-10 11:48:04.060228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.120 [2024-06-10 11:48:04.060255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.060269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.072973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.073000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.073014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.084939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.084967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.084982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.097846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.097873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.097888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.109467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.109496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.109511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.124325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.124353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.124369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.139155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.139182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.139197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.149788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.149816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.149830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.163230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.163258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.163272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.176922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.176950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.176965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.190043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.190070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.190085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.201103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0) 00:39:39.121 [2024-06-10 11:48:04.201132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.121 [2024-06-10 11:48:04.201151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:39.121 [2024-06-10 11:48:04.215265] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0)
00:39:39.121 [2024-06-10 11:48:04.215293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:39.121 [2024-06-10 11:48:04.215307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:39.380 [2024-06-10 11:48:04.226849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0)
00:39:39.380 [2024-06-10 11:48:04.226876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:39.380 [2024-06-10 11:48:04.226890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:39.380 [2024-06-10 11:48:04.241880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0)
00:39:39.380 [2024-06-10 11:48:04.241910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:39.380 [2024-06-10 11:48:04.241925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:39.380 [2024-06-10 11:48:04.253072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x192a9b0)
00:39:39.380 [2024-06-10 11:48:04.253100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:39.380 [2024-06-10 11:48:04.253115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:39.380
00:39:39.380 Latency(us)
00:39:39.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:39.380 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:39:39.380 nvme0n1 : 2.00 19753.49 77.16 0.00 0.00 6470.58 2949.12 17616.08
00:39:39.380 ===================================================================================================================
00:39:39.380 Total : 19753.49 77.16 0.00 0.00 6470.58 2949.12 17616.08
00:39:39.380 0
00:39:39.380 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:39:39.380 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:39:39.380 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:39:39.380 | .driver_specific
00:39:39.380 | .nvme_error
00:39:39.380 | .status_code
00:39:39.380 | .command_transient_transport_error'
00:39:39.380 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 155 > 0 ))
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4161258
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 4161258 ']'
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 4161258
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4161258
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4161258'
00:39:39.656 killing process with pid 4161258
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 4161258
00:39:39.656 Received shutdown signal, test time was about 2.000000 seconds
00:39:39.656
00:39:39.656 Latency(us)
00:39:39.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:39.656 ===================================================================================================================
00:39:39.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 4161258
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4162312
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4162312 /var/tmp/bperf.sock
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 4162312 ']'
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:39:39.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:39.656 11:48:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:39:39.915 [2024-06-10 11:48:04.802864] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization...
00:39:39.915 [2024-06-10 11:48:04.802929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4162312 ]
00:39:39.915 I/O size of 131072 is greater than zero copy threshold (65536).
00:39:39.915 Zero copy mechanism will not be used.
00:39:39.915 EAL: No free 2048 kB hugepages reported on node 1
00:39:39.915 [2024-06-10 11:48:04.912970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:39.915 [2024-06-10 11:48:05.003290] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:40.853 11:48:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:41.422 nvme0n1
00:39:41.422 11:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:39:41.422 11:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:39:41.422 11:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:41.422 11:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:39:41.422 11:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:39:41.422 11:48:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:41.422 I/O size of 131072 is greater than zero copy threshold (65536).
00:39:41.422 Zero copy mechanism will not be used.
00:39:41.422 Running I/O for 2 seconds...
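The xtrace above reduces to a short sequence of SPDK RPC calls: start bdevperf against /var/tmp/bperf.sock, attach the target over TCP with data digest enabled (--ddgst) and error counting instead of retries, arm crc32c corruption through the accel error-injection RPC, run the workload, and read the transient-transport-error count back out of bdev_get_iostat. The sketch below reproduces that flow outside the harness; it is a minimal illustration that reuses the paths, address, and subsystem NQN shown in the log, and it assumes the target-side accel_error_inject_error call goes to rpc.py's default socket (the trace does not show which socket rpc_cmd used).

    #!/usr/bin/env bash
    # Minimal sketch of the digest-error flow traced above; paths, address and NQN
    # are taken from the log, the target-side RPC socket is an assumption.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Initiator-side I/O generator (same flags as host/digest.sh@57); the harness
    # waits for the RPC socket to come up before issuing the calls below.
    "$SPDK"/build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

    # Count NVMe errors instead of retrying, then attach over TCP with data digest on.
    "$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm crc32c corruption in the accel layer (host/digest.sh@67); default socket assumed here.
    "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive the workload, then pull the transient transport error counter the test asserts on.
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
    "$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'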
00:39:41.681 [2024-06-10 11:48:06.538233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.681 [2024-06-10 11:48:06.538275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.681 [2024-06-10 11:48:06.538293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.681 [2024-06-10 11:48:06.550387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.681 [2024-06-10 11:48:06.550419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.681 [2024-06-10 11:48:06.550435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.681 [2024-06-10 11:48:06.562157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.681 [2024-06-10 11:48:06.562187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.681 [2024-06-10 11:48:06.562202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.681 [2024-06-10 11:48:06.572440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.681 [2024-06-10 11:48:06.572470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.681 [2024-06-10 11:48:06.572485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.681 [2024-06-10 11:48:06.582032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.582061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.582075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.593226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.593254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.593269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.603402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.603431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.603446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.617240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.617268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.617283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.628628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.628656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.628670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.641727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.641754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.641768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.653819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.653846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.653861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.665656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.665684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.665699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.676011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.676038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.676052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.688432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.688458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.688476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.699567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.699603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.699618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.710092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.710120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.710134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.720242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.720270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.720284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.729543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.729572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.729592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.740069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.740097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.740112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.750257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.750285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.750300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.760635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.760663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:41.682 [2024-06-10 11:48:06.760677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.770468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.770496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.770511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.682 [2024-06-10 11:48:06.780722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.682 [2024-06-10 11:48:06.780754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.682 [2024-06-10 11:48:06.780769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.790452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.942 [2024-06-10 11:48:06.790481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.942 [2024-06-10 11:48:06.790496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.800566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.942 [2024-06-10 11:48:06.800601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.942 [2024-06-10 11:48:06.800616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.810305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.942 [2024-06-10 11:48:06.810334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.942 [2024-06-10 11:48:06.810349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.820455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.942 [2024-06-10 11:48:06.820484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.942 [2024-06-10 11:48:06.820499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.830795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.942 [2024-06-10 11:48:06.830825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.942 [2024-06-10 11:48:06.830840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.843078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.942 [2024-06-10 11:48:06.843107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.942 [2024-06-10 11:48:06.843122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.856090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.942 [2024-06-10 11:48:06.856117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.942 [2024-06-10 11:48:06.856132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.867753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.942 [2024-06-10 11:48:06.867782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.942 [2024-06-10 11:48:06.867796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.882135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.942 [2024-06-10 11:48:06.882163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.942 [2024-06-10 11:48:06.882177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.942 [2024-06-10 11:48:06.893991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:06.894019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.894034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:06.904431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:06.904458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.904473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:06.914334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:06.914362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.914377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:06.924435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:06.924463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.924478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:06.936561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:06.936595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.936610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:06.950020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:06.950047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.950062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:06.960886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:06.960914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.960929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:06.971603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:06.971631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.971650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:06.983112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:06.983140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.983155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:06.996509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 
[2024-06-10 11:48:06.996537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:06.996552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:07.009622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:07.009650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:07.009664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:07.022977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:07.023004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:07.023019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:41.943 [2024-06-10 11:48:07.034941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:41.943 [2024-06-10 11:48:07.034968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:41.943 [2024-06-10 11:48:07.034983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.046077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.203 [2024-06-10 11:48:07.046106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.203 [2024-06-10 11:48:07.046121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.057240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.203 [2024-06-10 11:48:07.057269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.203 [2024-06-10 11:48:07.057283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.069119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.203 [2024-06-10 11:48:07.069153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.203 [2024-06-10 11:48:07.069167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.080327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xddc280) 00:39:42.203 [2024-06-10 11:48:07.080356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.203 [2024-06-10 11:48:07.080371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.090852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.203 [2024-06-10 11:48:07.090879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.203 [2024-06-10 11:48:07.090894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.103535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.203 [2024-06-10 11:48:07.103562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.203 [2024-06-10 11:48:07.103584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.115250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.203 [2024-06-10 11:48:07.115279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.203 [2024-06-10 11:48:07.115293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.125847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.203 [2024-06-10 11:48:07.125875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.203 [2024-06-10 11:48:07.125890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.136102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.203 [2024-06-10 11:48:07.136130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.203 [2024-06-10 11:48:07.136144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.203 [2024-06-10 11:48:07.145468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.145495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.145509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.158305] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.158332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.158346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.172528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.172555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.172574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.183764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.183791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.183805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.193465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.193494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.193508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.202635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.202664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.202678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.212067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.212095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.212110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.221935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.221963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.221978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:39:42.204 [2024-06-10 11:48:07.232197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.232224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.232239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.243106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.243134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.243149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.252630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.252658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.252672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.261653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.261686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.261701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.270939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.270967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.270982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.280299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.280327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.280341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.290143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.290171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.290186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.204 [2024-06-10 11:48:07.300072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.204 [2024-06-10 11:48:07.300101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.204 [2024-06-10 11:48:07.300115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.309280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.309308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.309323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.319070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.319099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.319113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.329399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.329426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.329440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.338605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.338633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.338647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.348912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.348941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.348956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.358700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.358727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.358742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.368179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.368208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.368222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.377210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.377238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.377252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.385305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.385332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.385346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.393359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.393386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.393400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.401390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.401417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.464 [2024-06-10 11:48:07.401431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.464 [2024-06-10 11:48:07.409619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.464 [2024-06-10 11:48:07.409646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.409660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.417686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.417713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:42.465 [2024-06-10 11:48:07.417731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.425876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.425904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.425918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.433841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.433870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.433884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.441850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.441877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.441891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.449928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.449955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.449969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.457978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.458005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.458019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.466030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.466058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.466072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.474191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.474219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.474233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.482176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.482202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.482216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.490210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.490241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.490255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.498264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.498292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.498307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.506334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.506362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.506382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.514519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.514546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.514562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.522588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.522615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.522630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.530571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.530604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.530619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.538780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.538807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.538821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.546943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.546970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.546984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.554973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.554999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.555013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.465 [2024-06-10 11:48:07.564281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.465 [2024-06-10 11:48:07.564309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.465 [2024-06-10 11:48:07.564323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.574559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.574593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.574608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.585457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.585485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.585499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.595980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 
00:39:42.725 [2024-06-10 11:48:07.596008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.596023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.606933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.606962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.606977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.618314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.618342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.618357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.628998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.629027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.629042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.639674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.639704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.639718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.650507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.650541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.650555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.661041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.661071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.661086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.671650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.671679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.671694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.681989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.725 [2024-06-10 11:48:07.682018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.725 [2024-06-10 11:48:07.682033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.725 [2024-06-10 11:48:07.692801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.692831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.692845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.703992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.704021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.704036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.713991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.714021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.714035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.724602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.724632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.724647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.735378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.735408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.735423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.745801] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.745831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.745845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.756774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.756804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.756818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.766838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.766867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.766882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.776223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.776253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.776268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.785677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.785706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.785720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.795160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.795190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.795204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.804790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.804820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.804835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
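The repeated "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs in this part of the log are the host-side NVMe/TCP code rejecting C2H data PDUs whose trailing data digest does not match the payload; each rejected READ is then printed with generic status 0x22 (Command Transient Transport Error), and the same triplet repeats with only the lba and sqhd fields changing. NVMe/TCP defines the data digest as a CRC-32C over the PDU data. The sketch below is a standalone illustration of that check, not SPDK's implementation; the function names are hypothetical and the bit-by-bit CRC is chosen for clarity over speed.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* CRC-32C (Castagnoli), reflected polynomial 0x82F63B78 -- the digest
     * NVMe/TCP uses for its header and data digests. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical receive-side check: recompute the digest over the PDU
     * payload and compare it with the digest carried at the end of the PDU.
     * A mismatch is what the log above reports as a "data digest error". */
    static int data_digest_ok(const uint8_t *payload, size_t len, uint32_t ddgst)
    {
        return crc32c(payload, len) == ddgst;
    }

    int main(void)
    {
        uint8_t payload[32];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t good = crc32c(payload, sizeof(payload));
        uint32_t bad  = good ^ 1u;  /* simulate a corrupted digest */

        printf("digest ok:  %d\n", data_digest_ok(payload, sizeof(payload), good));
        printf("digest bad: %d\n", data_digest_ok(payload, sizeof(payload), bad));
        return 0;
    }

In this run the mismatches are injected deliberately (the len:32 reads keep failing and being retried at new LBAs), so the flood of identical entries indicates the error path is being exercised, not that the transport is unhealthy.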
00:39:42.726 [2024-06-10 11:48:07.813986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.814016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.814031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.726 [2024-06-10 11:48:07.822382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.726 [2024-06-10 11:48:07.822412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.726 [2024-06-10 11:48:07.822431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.830556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.830724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.830740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.838843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.838872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.838886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.847148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.847177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.847192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.856026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.856056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.856071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.865366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.865395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.865410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.875147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.875176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.875190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.884418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.884447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.884461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.893147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.893177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.893191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.901363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.901396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.901410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.909545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.909574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.909596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.917625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.917652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.917666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.925749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.925778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.925792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.933915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.933943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.933957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.942036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.942064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.942078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.950313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.950341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.950356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.958359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.958387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.958401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.966470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.966499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.966513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.974590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.974618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.974632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.982771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.986 [2024-06-10 11:48:07.982799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.986 [2024-06-10 11:48:07.982813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.986 [2024-06-10 11:48:07.990847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:07.990876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:07.990890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:07.998899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:07.998927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:07.998942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.007100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.007129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.007143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.015453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.015480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.015495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.023599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.023626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.023640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.031710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.031738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.031752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.039819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.039847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 
[2024-06-10 11:48:08.039866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.047933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.047962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.047975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.056030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.056058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.056072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.064172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.064200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.064214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.072227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.072257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.072272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.080266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.080295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.080309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:42.987 [2024-06-10 11:48:08.088300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:42.987 [2024-06-10 11:48:08.088328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:42.987 [2024-06-10 11:48:08.088343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.096315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.096343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.096357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.104362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.104390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.104405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.112423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.112455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.112470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.120491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.120520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.120535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.128541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.128568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.128590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.136667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.136695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.136709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.144741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.144768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.144783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.152928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.152956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.152971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.161128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.161155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.161169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.169345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.169373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.169388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.177428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.177456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.177470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.185498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.185526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.185540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.193608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.248 [2024-06-10 11:48:08.193635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.248 [2024-06-10 11:48:08.193650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.248 [2024-06-10 11:48:08.201670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.201697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.201711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.209753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.209781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.209795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.217784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.217812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.217827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.225948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.225976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.225991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.234026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.234054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.234068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.242105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.242133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.242148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.250264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.250292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.250310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.258410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.258438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.258452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.266502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 
[2024-06-10 11:48:08.266530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.266545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.274689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.274716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.274731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.282766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.282794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.282808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.290824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.290852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.290866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.298934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.298962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.298976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.306993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.307021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.307035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.314984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.315011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.315025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.323018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.323047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.323062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.331050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.331078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.331092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.339183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.339211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.339225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.249 [2024-06-10 11:48:08.347386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.249 [2024-06-10 11:48:08.347414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.249 [2024-06-10 11:48:08.347428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.355485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.355512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.355527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.363557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.363592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.363607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.371707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.371734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.371749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.379731] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.379758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.379773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.387737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.387764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.387782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.395749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.395776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.395791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.403806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.403834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.403849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.411931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.411959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.411974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.420108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.420136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.420150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.428249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.428277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.428291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
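Each failure above is reported by the NVMe/TCP host driver as a group of three entries: nvme_tcp.c flags the data digest (CRC-32C) mismatch on the receive path, nvme_qpair.c echoes the READ command that was affected, and spdk_nvme_print_completion shows the resulting status, TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 / status code 0x22, with dnr:0 meaning the command may be retried. A minimal sketch for tallying these records offline from a saved copy of this console output (the file name nvmf_digest_error.log is only an example, not something the test produces), counted with grep -o so the result does not depend on how the log lines are wrapped:
  # tally the transient transport error completions
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log | wc -l
  # tally the digest mismatches detected on the receive path
  grep -o 'data digest error on tqpair' nvmf_digest_error.log | wc -l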
00:39:43.509 [2024-06-10 11:48:08.436392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.436419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.436433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.444653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.444680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.444695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.452760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.452789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.452804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.460867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.460900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.460915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.509 [2024-06-10 11:48:08.469042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.509 [2024-06-10 11:48:08.469070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.509 [2024-06-10 11:48:08.469084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.510 [2024-06-10 11:48:08.477261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.510 [2024-06-10 11:48:08.477289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.510 [2024-06-10 11:48:08.477303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:43.510 [2024-06-10 11:48:08.485332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280) 00:39:43.510 [2024-06-10 11:48:08.485360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:43.510 [2024-06-10 11:48:08.485374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:39:43.510 [2024-06-10 11:48:08.493402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280)
00:39:43.510 [2024-06-10 11:48:08.493430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:43.510 [2024-06-10 11:48:08.493444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:39:43.510 [2024-06-10 11:48:08.501533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280)
00:39:43.510 [2024-06-10 11:48:08.501561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:43.510 [2024-06-10 11:48:08.501582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:39:43.510 [2024-06-10 11:48:08.509716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280)
00:39:43.510 [2024-06-10 11:48:08.509744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:43.510 [2024-06-10 11:48:08.509758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:39:43.510 [2024-06-10 11:48:08.517876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xddc280)
00:39:43.510 [2024-06-10 11:48:08.517904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:43.510 [2024-06-10 11:48:08.517918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:39:43.510
00:39:43.510 Latency(us)
00:39:43.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:43.510 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:39:43.510 nvme0n1 : 2.00 3275.31 409.41 0.00 0.00 4881.36 1612.19 16882.07
00:39:43.510 ===================================================================================================================
00:39:43.510 Total : 3275.31 409.41 0.00 0.00 4881.36 1612.19 16882.07
00:39:43.510 0
00:39:43.510 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:39:43.510 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:39:43.510 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:39:43.510 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:39:43.510 | .driver_specific
00:39:43.510 | .nvme_error
00:39:43.510 | .status_code
00:39:43.510 | .command_transient_transport_error'
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 ))
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4162312
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 4162312 ']'
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 4162312
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4162312
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4162312'
killing process with pid 4162312
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 4162312
Received shutdown signal, test time was about 2.000000 seconds
00:39:43.769
00:39:43.769 Latency(us)
00:39:43.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:43.769 ===================================================================================================================
00:39:43.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:39:43.769 11:48:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 4162312
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4163121
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4163121 /var/tmp/bperf.sock
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 4163121 ']'
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
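Just before the old bdevperf process was killed above, host/digest.sh@71 derived its pass/fail signal for the randread pass: get_transient_errcount fetched bdev_get_iostat over the bperf.sock RPC socket, jq extracted the command_transient_transport_error counter from the NVMe error statistics (211 in this run, collected because --nvme-error-stat was set), and the test asserted the count was greater than zero. A minimal standalone sketch of that extraction, assuming the same rpc.py path, socket and bdev name as in the trace; the one-line jq filter below is just a compact form of the multi-line filter shown above:
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  count=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # mirror the host/digest.sh check: at least one digest error must have been surfaced
  (( count > 0 )) && echo "transient transport errors observed: $count"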
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:39:44.029 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:44.029 [2024-06-10 11:48:09.080407] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization...
00:39:44.029 [2024-06-10 11:48:09.080474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163121 ]
00:39:44.288 EAL: No free 2048 kB hugepages reported on node 1
00:39:44.288 [2024-06-10 11:48:09.190804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:44.288 [2024-06-10 11:48:09.273488] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:39:45.225 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:39:45.225 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:39:45.225 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:45.225 11:48:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:45.225 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:39:45.225 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:39:45.225 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:45.225 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:39:45.225 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:45.225 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:45.794 nvme0n1
00:39:45.794 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:39:45.794 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:39:45.794 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:45.794 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:39:45.794 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:39:45.794 11:48:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:45.794 Running I/O for 2 seconds...
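The trace above brings up the next error pass, run_bperf_err randwrite 4096 128: bdevperf is started against /var/tmp/bperf.sock with 4096-byte random writes at queue depth 128 for 2 seconds, NVMe error statistics are enabled and the bdev retry count is set to -1, crc32c error injection is first disabled so the controller can attach cleanly with data digest (--ddgst), and then accel_error_inject_error arms crc32c corruption for 256 operations (-i 256) before perform_tests starts the run. A condensed sketch of the same sequence; the bperf_rpc calls resolve to the rpc.py invocations shown above, while the accel_error_inject_error steps are issued through the autotest rpc_cmd helper, whose target socket is not shown in this excerpt:
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # rpc_cmd accel_error_inject_error -o crc32c -t disable        (via rpc_cmd in the trace)
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256  (via rpc_cmd in the trace)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
The data digest errors that follow are then reported against WRITE commands and, as in the randread pass, each one is completed back to the application as COMMAND TRANSIENT TRANSPORT ERROR (00/22).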
00:39:45.794 [2024-06-10 11:48:10.815215] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7da8 00:39:45.794 [2024-06-10 11:48:10.816233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:45.794 [2024-06-10 11:48:10.816269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:45.794 [2024-06-10 11:48:10.829021] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1868 00:39:45.794 [2024-06-10 11:48:10.830452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:45.794 [2024-06-10 11:48:10.830480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:39:45.794 [2024-06-10 11:48:10.843864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e01f8 00:39:45.794 [2024-06-10 11:48:10.846027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:45.794 [2024-06-10 11:48:10.846053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:39:45.794 [2024-06-10 11:48:10.852652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fd208 00:39:45.794 [2024-06-10 11:48:10.853568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:45.794 [2024-06-10 11:48:10.853598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:39:45.794 [2024-06-10 11:48:10.865659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e38d0 00:39:45.794 [2024-06-10 11:48:10.866832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:45.794 [2024-06-10 11:48:10.866858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:39:45.794 [2024-06-10 11:48:10.878021] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e1b48 00:39:45.794 [2024-06-10 11:48:10.879194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:45.794 [2024-06-10 11:48:10.879218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:39:45.794 [2024-06-10 11:48:10.891516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e38d0 00:39:45.794 [2024-06-10 11:48:10.893208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:45.794 [2024-06-10 11:48:10.893234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:39:46.054 [2024-06-10 11:48:10.902498] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fc998 00:39:46.054 [2024-06-10 11:48:10.903239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.054 [2024-06-10 11:48:10.903264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:46.054 [2024-06-10 11:48:10.915084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fa7d8 00:39:46.054 [2024-06-10 11:48:10.916149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.054 [2024-06-10 11:48:10.916174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:39:46.054 [2024-06-10 11:48:10.928055] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fc560 00:39:46.054 [2024-06-10 11:48:10.929349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.054 [2024-06-10 11:48:10.929373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:39:46.054 [2024-06-10 11:48:10.940419] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e3d08 00:39:46.054 [2024-06-10 11:48:10.941736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.054 [2024-06-10 11:48:10.941760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:39:46.054 [2024-06-10 11:48:10.952775] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f2948 00:39:46.054 [2024-06-10 11:48:10.954084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.054 [2024-06-10 11:48:10.954108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:39:46.054 [2024-06-10 11:48:10.965171] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fc560 00:39:46.054 [2024-06-10 11:48:10.966469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:10.966494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:10.977710] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ef6a8 00:39:46.055 [2024-06-10 11:48:10.978940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:10.978965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:10.990047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ff3c8 00:39:46.055 [2024-06-10 11:48:10.991387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:10.991412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.002372] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ea680 00:39:46.055 [2024-06-10 11:48:11.003689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.003714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.014671] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eb760 00:39:46.055 [2024-06-10 11:48:11.015987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.016012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.026964] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e5ec8 00:39:46.055 [2024-06-10 11:48:11.028280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.028304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.039253] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f5be8 00:39:46.055 [2024-06-10 11:48:11.040595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.040619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.051572] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fe2e8 00:39:46.055 [2024-06-10 11:48:11.052826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.052855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.063899] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e2c28 00:39:46.055 [2024-06-10 11:48:11.065131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.065156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.076203] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e3d08 00:39:46.055 [2024-06-10 11:48:11.077542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.077567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.088493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e4de8 00:39:46.055 [2024-06-10 11:48:11.089831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.089859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.100791] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190efae0 00:39:46.055 [2024-06-10 11:48:11.102105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.102130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.113101] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190df988 00:39:46.055 [2024-06-10 11:48:11.114460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.114484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.125417] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7538 00:39:46.055 [2024-06-10 11:48:11.126742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.126766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.137735] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ed920 00:39:46.055 [2024-06-10 11:48:11.139063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.139087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.055 [2024-06-10 11:48:11.150026] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ec840 00:39:46.055 [2024-06-10 11:48:11.151357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.055 [2024-06-10 11:48:11.151381] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.315 [2024-06-10 11:48:11.162319] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1868 00:39:46.315 [2024-06-10 11:48:11.163659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.315 [2024-06-10 11:48:11.163684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.315 [2024-06-10 11:48:11.174668] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f2948 00:39:46.316 [2024-06-10 11:48:11.175978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.176002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.186990] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eea00 00:39:46.316 [2024-06-10 11:48:11.188292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.188316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.199322] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e1b48 00:39:46.316 [2024-06-10 11:48:11.200650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.200674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.211630] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eb328 00:39:46.316 [2024-06-10 11:48:11.212972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.212996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.223945] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e6300 00:39:46.316 [2024-06-10 11:48:11.225277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.225301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.236240] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f6020 00:39:46.316 [2024-06-10 11:48:11.237592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.237615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.248552] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ddc00 00:39:46.316 [2024-06-10 11:48:11.249883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.249907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.260879] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e27f0 00:39:46.316 [2024-06-10 11:48:11.262203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.262227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.273199] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e38d0 00:39:46.316 [2024-06-10 11:48:11.274531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.274556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.285518] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e49b0 00:39:46.316 [2024-06-10 11:48:11.286841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.286865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.297842] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f4298 00:39:46.316 [2024-06-10 11:48:11.299076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.299101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.310168] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190dece0 00:39:46.316 [2024-06-10 11:48:11.311527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.311551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.322478] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f81e0 00:39:46.316 [2024-06-10 11:48:11.323831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 
11:48:11.323855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.334806] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7100 00:39:46.316 [2024-06-10 11:48:11.336128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.336152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.347099] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ed4e8 00:39:46.316 [2024-06-10 11:48:11.348404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.348429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.359386] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f0bc0 00:39:46.316 [2024-06-10 11:48:11.360714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.360739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.371682] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1ca0 00:39:46.316 [2024-06-10 11:48:11.373001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.373028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.383979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ef6a8 00:39:46.316 [2024-06-10 11:48:11.385305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.385329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.396263] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ff3c8 00:39:46.316 [2024-06-10 11:48:11.397596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.316 [2024-06-10 11:48:11.397621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.316 [2024-06-10 11:48:11.408848] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ea680 00:39:46.316 [2024-06-10 11:48:11.410182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4764 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:39:46.316 [2024-06-10 11:48:11.410206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.576 [2024-06-10 11:48:11.421141] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eb760 00:39:46.576 [2024-06-10 11:48:11.422462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.576 [2024-06-10 11:48:11.422486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.576 [2024-06-10 11:48:11.433410] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e5ec8 00:39:46.576 [2024-06-10 11:48:11.434645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.576 [2024-06-10 11:48:11.434669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.576 [2024-06-10 11:48:11.445722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f5be8 00:39:46.576 [2024-06-10 11:48:11.447049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.576 [2024-06-10 11:48:11.447074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.576 [2024-06-10 11:48:11.458017] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fe2e8 00:39:46.576 [2024-06-10 11:48:11.459333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.576 [2024-06-10 11:48:11.459357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.576 [2024-06-10 11:48:11.470311] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e2c28 00:39:46.576 [2024-06-10 11:48:11.471629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.576 [2024-06-10 11:48:11.471653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.576 [2024-06-10 11:48:11.482612] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e3d08 00:39:46.576 [2024-06-10 11:48:11.483944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.576 [2024-06-10 11:48:11.483968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.576 [2024-06-10 11:48:11.495034] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e4de8 00:39:46.576 [2024-06-10 11:48:11.496367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21910 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:39:46.576 [2024-06-10 11:48:11.496392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.576 [2024-06-10 11:48:11.507319] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190efae0 00:39:46.576 [2024-06-10 11:48:11.508640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.576 [2024-06-10 11:48:11.508665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.576 [2024-06-10 11:48:11.519614] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190df988 00:39:46.577 [2024-06-10 11:48:11.520857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.520881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.531960] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7538 00:39:46.577 [2024-06-10 11:48:11.533200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.533225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.544283] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ed920 00:39:46.577 [2024-06-10 11:48:11.545517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.545541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.556597] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ec840 00:39:46.577 [2024-06-10 11:48:11.557831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.557854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.568879] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1868 00:39:46.577 [2024-06-10 11:48:11.570110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.570133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.581168] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f2948 00:39:46.577 [2024-06-10 11:48:11.582498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:9558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.582522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.593453] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eea00 00:39:46.577 [2024-06-10 11:48:11.594775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.594799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.605728] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e1b48 00:39:46.577 [2024-06-10 11:48:11.607065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.607089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.618006] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eb328 00:39:46.577 [2024-06-10 11:48:11.619329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.619353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.630278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e6300 00:39:46.577 [2024-06-10 11:48:11.631602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.631626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.642560] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f6020 00:39:46.577 [2024-06-10 11:48:11.643795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.643818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.654853] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ddc00 00:39:46.577 [2024-06-10 11:48:11.656157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.656181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.667183] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e27f0 00:39:46.577 [2024-06-10 11:48:11.668492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:100 nsid:1 lba:590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.577 [2024-06-10 11:48:11.668515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.577 [2024-06-10 11:48:11.679463] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e38d0 00:39:46.837 [2024-06-10 11:48:11.680799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.680823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.691751] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e49b0 00:39:46.837 [2024-06-10 11:48:11.693077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.693108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.704027] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f4298 00:39:46.837 [2024-06-10 11:48:11.705356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.705380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.716310] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190dece0 00:39:46.837 [2024-06-10 11:48:11.717631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.717655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.728609] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f81e0 00:39:46.837 [2024-06-10 11:48:11.729923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.729947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.740875] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7100 00:39:46.837 [2024-06-10 11:48:11.742206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.742229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.753139] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ed4e8 00:39:46.837 [2024-06-10 11:48:11.754450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.754474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.765403] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f0bc0 00:39:46.837 [2024-06-10 11:48:11.766719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.766744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.777681] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1ca0 00:39:46.837 [2024-06-10 11:48:11.778992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.779016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.789976] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ef6a8 00:39:46.837 [2024-06-10 11:48:11.791302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.791326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.802272] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ff3c8 00:39:46.837 [2024-06-10 11:48:11.803587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.803612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.814546] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ea680 00:39:46.837 [2024-06-10 11:48:11.815859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.815884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.826814] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eb760 00:39:46.837 [2024-06-10 11:48:11.828118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.828143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.839084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e5ec8 00:39:46.837 [2024-06-10 
11:48:11.840407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.840431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.851366] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f5be8 00:39:46.837 [2024-06-10 11:48:11.852689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.852714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.863667] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fe2e8 00:39:46.837 [2024-06-10 11:48:11.864990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.865014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.875932] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e2c28 00:39:46.837 [2024-06-10 11:48:11.877272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.877297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.888189] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e3d08 00:39:46.837 [2024-06-10 11:48:11.889523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.889548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.900477] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e4de8 00:39:46.837 [2024-06-10 11:48:11.901813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.901838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.912771] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190efae0 00:39:46.837 [2024-06-10 11:48:11.914098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.837 [2024-06-10 11:48:11.914123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.837 [2024-06-10 11:48:11.925054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190df988 
00:39:46.837 [2024-06-10 11:48:11.926373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.838 [2024-06-10 11:48:11.926397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:46.838 [2024-06-10 11:48:11.937357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7538 00:39:46.838 [2024-06-10 11:48:11.938681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:46.838 [2024-06-10 11:48:11.938705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.097 [2024-06-10 11:48:11.949639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ed920 00:39:47.097 [2024-06-10 11:48:11.950979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.097 [2024-06-10 11:48:11.951003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.097 [2024-06-10 11:48:11.961896] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ec840 00:39:47.097 [2024-06-10 11:48:11.963225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.097 [2024-06-10 11:48:11.963249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.097 [2024-06-10 11:48:11.974204] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1868 00:39:47.097 [2024-06-10 11:48:11.975539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.097 [2024-06-10 11:48:11.975563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.097 [2024-06-10 11:48:11.986751] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f2948 00:39:47.097 [2024-06-10 11:48:11.988060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.097 [2024-06-10 11:48:11.988086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.097 [2024-06-10 11:48:11.999076] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eea00 00:39:47.097 [2024-06-10 11:48:12.000404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.097 [2024-06-10 11:48:12.000430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.097 [2024-06-10 11:48:12.011369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2219900) with pdu=0x2000190e1b48 00:39:47.097 [2024-06-10 11:48:12.012694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.097 [2024-06-10 11:48:12.012723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.097 [2024-06-10 11:48:12.023640] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eb328 00:39:47.097 [2024-06-10 11:48:12.024941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.097 [2024-06-10 11:48:12.024965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.035898] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e6300 00:39:47.098 [2024-06-10 11:48:12.037200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.037224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.048207] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f6020 00:39:47.098 [2024-06-10 11:48:12.049516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.049541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.060492] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ddc00 00:39:47.098 [2024-06-10 11:48:12.061815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.061840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.072777] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e27f0 00:39:47.098 [2024-06-10 11:48:12.074085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.074109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.085055] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e38d0 00:39:47.098 [2024-06-10 11:48:12.086368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.086392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.097335] tcp.c:2062:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e49b0 00:39:47.098 [2024-06-10 11:48:12.098642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.098667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.109615] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f4298 00:39:47.098 [2024-06-10 11:48:12.110927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.110952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.121886] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190dece0 00:39:47.098 [2024-06-10 11:48:12.123190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.123220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.134163] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f81e0 00:39:47.098 [2024-06-10 11:48:12.135465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.135490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.146437] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7100 00:39:47.098 [2024-06-10 11:48:12.147759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.147783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.158695] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ed4e8 00:39:47.098 [2024-06-10 11:48:12.160009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.160033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.170967] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f0bc0 00:39:47.098 [2024-06-10 11:48:12.172292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.172316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.183266] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1ca0 00:39:47.098 [2024-06-10 11:48:12.184566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.184596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.098 [2024-06-10 11:48:12.195533] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ef6a8 00:39:47.098 [2024-06-10 11:48:12.196841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.098 [2024-06-10 11:48:12.196866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.207811] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ff3c8 00:39:47.358 [2024-06-10 11:48:12.209117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.209141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.220079] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ea680 00:39:47.358 [2024-06-10 11:48:12.221375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.221400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.232340] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eb760 00:39:47.358 [2024-06-10 11:48:12.233646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.233670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.244631] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e5ec8 00:39:47.358 [2024-06-10 11:48:12.245937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.245961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.256905] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f5be8 00:39:47.358 [2024-06-10 11:48:12.258211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.258235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 
[2024-06-10 11:48:12.269184] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fe2e8 00:39:47.358 [2024-06-10 11:48:12.270490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.270513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.281444] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e2c28 00:39:47.358 [2024-06-10 11:48:12.282754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.282778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.293736] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e3d08 00:39:47.358 [2024-06-10 11:48:12.295043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.295067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.306002] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e4de8 00:39:47.358 [2024-06-10 11:48:12.307301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.307324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.318283] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190efae0 00:39:47.358 [2024-06-10 11:48:12.319602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.319631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.330556] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190df988 00:39:47.358 [2024-06-10 11:48:12.331860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.331884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.342850] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7538 00:39:47.358 [2024-06-10 11:48:12.344181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.344206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 
p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.355120] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ed920 00:39:47.358 [2024-06-10 11:48:12.356445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.356469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.367388] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ec840 00:39:47.358 [2024-06-10 11:48:12.368715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.368738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.379683] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1868 00:39:47.358 [2024-06-10 11:48:12.381011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.381036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.391974] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f2948 00:39:47.358 [2024-06-10 11:48:12.393307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.393332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.358 [2024-06-10 11:48:12.404256] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eea00 00:39:47.358 [2024-06-10 11:48:12.405846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.358 [2024-06-10 11:48:12.405871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.359 [2024-06-10 11:48:12.416832] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e1b48 00:39:47.359 [2024-06-10 11:48:12.418165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.359 [2024-06-10 11:48:12.418189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.359 [2024-06-10 11:48:12.429127] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eb328 00:39:47.359 [2024-06-10 11:48:12.430450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.359 [2024-06-10 11:48:12.430474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.359 [2024-06-10 11:48:12.441409] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e6300 00:39:47.359 [2024-06-10 11:48:12.442712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.359 [2024-06-10 11:48:12.442747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.359 [2024-06-10 11:48:12.453712] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f6020 00:39:47.359 [2024-06-10 11:48:12.455019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.359 [2024-06-10 11:48:12.455043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.466015] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ddc00 00:39:47.618 [2024-06-10 11:48:12.467246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.467270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.478322] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e27f0 00:39:47.618 [2024-06-10 11:48:12.479550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.479579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.490625] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e38d0 00:39:47.618 [2024-06-10 11:48:12.491963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.491987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.503007] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e49b0 00:39:47.618 [2024-06-10 11:48:12.504346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.504370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.515319] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f4298 00:39:47.618 [2024-06-10 11:48:12.516643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.516667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.527618] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190dece0 00:39:47.618 [2024-06-10 11:48:12.528924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.528947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.539932] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f81e0 00:39:47.618 [2024-06-10 11:48:12.541178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.541202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.552236] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7100 00:39:47.618 [2024-06-10 11:48:12.553564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.553593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.564520] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ed4e8 00:39:47.618 [2024-06-10 11:48:12.565794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.565818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.576850] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f0bc0 00:39:47.618 [2024-06-10 11:48:12.578118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.578141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.589143] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1ca0 00:39:47.618 [2024-06-10 11:48:12.590469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.618 [2024-06-10 11:48:12.590493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.618 [2024-06-10 11:48:12.601431] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ef6a8 00:39:47.618 [2024-06-10 11:48:12.602761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.602785] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.619 [2024-06-10 11:48:12.613741] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ff3c8 00:39:47.619 [2024-06-10 11:48:12.615073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.615097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.619 [2024-06-10 11:48:12.626085] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ea680 00:39:47.619 [2024-06-10 11:48:12.627389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.627413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.619 [2024-06-10 11:48:12.638358] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190eb760 00:39:47.619 [2024-06-10 11:48:12.639687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.639712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.619 [2024-06-10 11:48:12.650659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e5ec8 00:39:47.619 [2024-06-10 11:48:12.651983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.652007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.619 [2024-06-10 11:48:12.662974] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f5be8 00:39:47.619 [2024-06-10 11:48:12.664307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.664331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.619 [2024-06-10 11:48:12.675261] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190fe2e8 00:39:47.619 [2024-06-10 11:48:12.676567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.676596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.619 [2024-06-10 11:48:12.687547] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e2c28 00:39:47.619 [2024-06-10 11:48:12.688819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.688843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.619 [2024-06-10 11:48:12.699834] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e3d08 00:39:47.619 [2024-06-10 11:48:12.701185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.701209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.619 [2024-06-10 11:48:12.712125] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190e4de8 00:39:47.619 [2024-06-10 11:48:12.713395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.619 [2024-06-10 11:48:12.713419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.878 [2024-06-10 11:48:12.724418] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190efae0 00:39:47.878 [2024-06-10 11:48:12.725745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.878 [2024-06-10 11:48:12.725769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.878 [2024-06-10 11:48:12.736734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190df988 00:39:47.878 [2024-06-10 11:48:12.737999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.878 [2024-06-10 11:48:12.738023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.878 [2024-06-10 11:48:12.749039] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f7538 00:39:47.878 [2024-06-10 11:48:12.750351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.878 [2024-06-10 11:48:12.750375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.878 [2024-06-10 11:48:12.761316] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ed920 00:39:47.878 [2024-06-10 11:48:12.762645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.878 [2024-06-10 11:48:12.762672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.878 [2024-06-10 11:48:12.773612] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190ec840 00:39:47.878 [2024-06-10 11:48:12.774856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.878 [2024-06-10 
11:48:12.774880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.878 [2024-06-10 11:48:12.785932] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f1868 00:39:47.878 [2024-06-10 11:48:12.787165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.878 [2024-06-10 11:48:12.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.878 [2024-06-10 11:48:12.798249] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219900) with pdu=0x2000190f2948 00:39:47.878 [2024-06-10 11:48:12.799588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:47.878 [2024-06-10 11:48:12.799612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:47.878 00:39:47.878 Latency(us) 00:39:47.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:47.878 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:47.878 nvme0n1 : 2.00 20659.62 80.70 0.00 0.00 6186.01 3106.41 18140.36 00:39:47.878 =================================================================================================================== 00:39:47.878 Total : 20659.62 80.70 0.00 0.00 6186.01 3106.41 18140.36 00:39:47.878 0 00:39:47.878 11:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:47.878 11:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:47.878 11:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:47.878 | .driver_specific 00:39:47.878 | .nvme_error 00:39:47.878 | .status_code 00:39:47.878 | .command_transient_transport_error' 00:39:47.878 11:48:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:48.137 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:39:48.137 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4163121 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 4163121 ']' 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 4163121 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4163121 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4163121' 
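The trace above is how the harness scores this case: digest.sh calls bdev_get_iostat over the bperf RPC socket and filters the bdev's NVMe error statistics down to the command_transient_transport_error counter (162 in this run), which must be greater than zero for the data-digest-error pass to succeed. The summary table is also self-consistent: 20659.62 IOPS at the 4096-byte I/O size is 20659.62 * 4096 / 1048576 ≈ 80.70 MiB/s, matching the MiB/s column. A minimal standalone version of the same query, assuming the bdevperf RPC socket is still listening at /var/tmp/bperf.sock and jq is available, would be:

  # Read per-bdev I/O statistics from the bdevperf instance and extract the
  # transient transport error count accumulated under --nvme-error-stat.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'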
00:39:48.138 killing process with pid 4163121 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 4163121 00:39:48.138 Received shutdown signal, test time was about 2.000000 seconds 00:39:48.138 00:39:48.138 Latency(us) 00:39:48.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:48.138 =================================================================================================================== 00:39:48.138 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:48.138 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 4163121 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4163924 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4163924 /var/tmp/bperf.sock 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 4163924 ']' 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:48.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:48.397 11:48:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:48.397 [2024-06-10 11:48:13.347547] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:39:48.397 [2024-06-10 11:48:13.347648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163924 ] 00:39:48.397 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:48.397 Zero copy mechanism will not be used. 
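With the 4 KiB run torn down, run_bperf_err starts a fresh bdevperf for the 131072-byte, queue-depth-16 randwrite pass; the invocation and the wait for its RPC socket are traced above. A rough equivalent of that launch step, assuming the same workspace layout and that polling rpc_get_methods is an acceptable readiness check, would be:

  # Launch bdevperf on core 1 (-m 2) with a private RPC socket; -z keeps it idle
  # until perform_tests is sent, so error injection can be configured first.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Poll the RPC socket until the application is ready to accept configuration.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done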
00:39:48.397 EAL: No free 2048 kB hugepages reported on node 1 00:39:48.397 [2024-06-10 11:48:13.458529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.657 [2024-06-10 11:48:13.544937] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:49.225 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:49.225 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:39:49.225 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:49.225 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:49.484 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:49.484 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.484 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:49.484 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.484 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:49.484 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:50.052 nvme0n1 00:39:50.052 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:39:50.052 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.053 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:50.053 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.053 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:50.053 11:48:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:50.053 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:50.053 Zero copy mechanism will not be used. 00:39:50.053 Running I/O for 2 seconds... 
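Before the two-second run starts, the trace above configures the new bdevperf entirely over RPC: per-controller NVMe error statistics with unlimited bdev retries, crc32c error injection disabled while the controller attaches, attachment over TCP with data digest enabled (--ddgst), then crc32c corruption armed and the workload kicked off via perform_tests. Note that the bdev_nvme calls go through bperf_rpc (the /var/tmp/bperf.sock socket), while the accel_error_inject_error calls go through rpc_cmd, i.e. to the target application; its socket is not shown expanded in the trace, so the default is assumed in the condensed sketch below:

  bperf="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  tgt="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # target app, default socket assumed
  # Count NVMe errors per controller and retry indefinitely, so injected digest
  # errors surface as transient transport errors instead of failing the workload.
  $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Keep crc32c injection off while the controller attaches cleanly...
  $tgt accel_error_inject_error -o crc32c -t disable
  # ...attach over TCP with data digest enabled so payload CRCs are generated and checked...
  $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then arm crc32c corruption (same -o/-t/-i arguments as the trace) and start I/O.
  $tgt accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests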
00:39:50.053 [2024-06-10 11:48:15.036311] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.036771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.036808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.048933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.049369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.049400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.059052] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.059468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.059498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.067551] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.068005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.068033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.076915] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.077395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.077422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.085980] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.086426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.086453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.094480] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.094901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.094927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.103921] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.104361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.104388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.111493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.111926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.111952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.119686] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.119799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.119824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.132826] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.133295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.133321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.146051] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.053 [2024-06-10 11:48:15.146637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.053 [2024-06-10 11:48:15.146663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.053 [2024-06-10 11:48:15.155995] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.313 [2024-06-10 11:48:15.156414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.313 [2024-06-10 11:48:15.156440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.313 [2024-06-10 11:48:15.165187] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.313 [2024-06-10 11:48:15.165506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.313 [2024-06-10 11:48:15.165532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.313 [2024-06-10 11:48:15.173301] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.313 [2024-06-10 11:48:15.173659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.313 [2024-06-10 11:48:15.173695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.313 [2024-06-10 11:48:15.181979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.313 [2024-06-10 11:48:15.182402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.313 [2024-06-10 11:48:15.182427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.313 [2024-06-10 11:48:15.190794] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.313 [2024-06-10 11:48:15.191168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.313 [2024-06-10 11:48:15.191194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.313 [2024-06-10 11:48:15.199148] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.313 [2024-06-10 11:48:15.199471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.313 [2024-06-10 11:48:15.199497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.313 [2024-06-10 11:48:15.207079] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.313 [2024-06-10 11:48:15.207515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.313 [2024-06-10 11:48:15.207541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.313 [2024-06-10 11:48:15.215081] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.313 [2024-06-10 11:48:15.215497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.313 [2024-06-10 11:48:15.215522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.224186] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.224687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.224712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.233160] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.233574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.233604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.241128] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.241585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.241611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.249455] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.249787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.249813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.257694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.258156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.258182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.265242] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.265645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.265671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.278104] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.278596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.278621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.287796] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.288196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 
[2024-06-10 11:48:15.288223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.295569] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.295903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.295932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.303280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.303633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.303660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.311012] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.311323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.311350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.318989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.319345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.319371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.326504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.326837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.326863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.334440] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.334803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.334830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.342476] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.342805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.342831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.350750] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.351139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.351165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.357620] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.357939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.357966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.364821] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.365212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.365237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.371703] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.372062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.372088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.379253] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.379651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.379677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.386115] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.386453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.386482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.393316] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.393641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.393667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.400425] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.400744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.400769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.407686] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.408013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.408039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.314 [2024-06-10 11:48:15.414752] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.314 [2024-06-10 11:48:15.415075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.314 [2024-06-10 11:48:15.415101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.422986] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.423363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.423389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.430278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.430717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.430742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.438172] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.438503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.438529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.446622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.446941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.446966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.454922] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.455236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.455261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.463044] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.463371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.463397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.472277] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.472699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.472725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.480370] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.480820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.480846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.487620] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.488027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.488052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.495559] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 [2024-06-10 11:48:15.496014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.574 [2024-06-10 11:48:15.496040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.574 [2024-06-10 11:48:15.504084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.574 
[2024-06-10 11:48:15.504490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.504516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.512654] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.513121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.513147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.521187] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.521591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.521625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.529422] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.529820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.529845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.537989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.538400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.538426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.546407] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.546845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.546871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.554937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.555380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.555406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.563701] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.564146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.564172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.572738] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.573191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.573216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.580990] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.581475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.581501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.589625] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.590033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.590058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.597968] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.598526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.598551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.606651] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.607067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.607092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.613988] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.614301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.614326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.621752] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.622132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.622157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.629133] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.629484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.629509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.637287] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.637654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.637680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.646377] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.646803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.646829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.654136] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.654491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.654517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.661731] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.662067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.662092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.575 [2024-06-10 11:48:15.668940] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.669315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.669340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:39:50.575 [2024-06-10 11:48:15.676340] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.575 [2024-06-10 11:48:15.676736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.575 [2024-06-10 11:48:15.676761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.684442] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.684875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.684901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.692675] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.692992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.693017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.700488] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.700934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.700960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.707873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.708284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.708309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.715251] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.715617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.715643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.722004] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.722330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.722356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.729659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.730007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.730036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.737510] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.737837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.737862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.744822] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.745143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.745167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.752474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.752800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.752825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.760122] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.760435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.760460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.767794] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.768106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.768131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.775535] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.775874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.775899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.783227] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.783552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.783584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.790803] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.791128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.791153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.798244] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.798570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.798603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.805598] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.805922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.805947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.813757] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.814106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.814131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.821606] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.821942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.821967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.828979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.829303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.829328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.836444] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.836769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.836794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.843677] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.844065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.844091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.850863] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.851189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.851215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.857995] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.858413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.858438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.865145] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.865498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.865523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.871606] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.836 [2024-06-10 11:48:15.871922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.836 [2024-06-10 11:48:15.871948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.836 [2024-06-10 11:48:15.878834] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.837 [2024-06-10 11:48:15.879243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.837 
[2024-06-10 11:48:15.879269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.837 [2024-06-10 11:48:15.885923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.837 [2024-06-10 11:48:15.886343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.837 [2024-06-10 11:48:15.886368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.837 [2024-06-10 11:48:15.893729] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.837 [2024-06-10 11:48:15.894052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.837 [2024-06-10 11:48:15.894077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.837 [2024-06-10 11:48:15.900687] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.837 [2024-06-10 11:48:15.901073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.837 [2024-06-10 11:48:15.901098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.837 [2024-06-10 11:48:15.907728] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.837 [2024-06-10 11:48:15.908195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.837 [2024-06-10 11:48:15.908220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:50.837 [2024-06-10 11:48:15.915013] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.837 [2024-06-10 11:48:15.915339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.837 [2024-06-10 11:48:15.915365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:50.837 [2024-06-10 11:48:15.922770] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.837 [2024-06-10 11:48:15.923151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.837 [2024-06-10 11:48:15.923181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:50.837 [2024-06-10 11:48:15.930362] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.837 [2024-06-10 11:48:15.930715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:39:50.837 [2024-06-10 11:48:15.930741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:50.837 [2024-06-10 11:48:15.937881] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:50.837 [2024-06-10 11:48:15.938203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:50.837 [2024-06-10 11:48:15.938228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.095 [2024-06-10 11:48:15.944951] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:15.945269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:15.945294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:15.952154] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:15.952479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:15.952505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:15.959587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:15.959915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:15.959941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:15.967002] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:15.967352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:15.967377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:15.974566] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:15.974904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:15.974929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:15.982267] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:15.982696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:15.982721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:15.990168] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:15.990511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:15.990536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:15.997563] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:15.997908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:15.997933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.005399] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.005724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.005749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.012902] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.013238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.013264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.020596] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.020926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.020951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.028679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.029004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.029029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.036836] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.037202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.037226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.045315] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.045662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.045688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.052694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.053011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.053038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.060224] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.060552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.060584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.068083] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.068408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.068434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.075539] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.075876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.075902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.082994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.083320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.083345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.090884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 
[2024-06-10 11:48:16.091232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.091257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.098734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.099059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.099085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.105946] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.106264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.106289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.113329] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.113633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.113659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.119624] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.119900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.119931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.125334] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.125691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.125716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.131980] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.132272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.132298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.138900] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.139235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.139260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.145961] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.146272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.146298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.152617] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.152941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.152967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.158823] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.159099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.159124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.164891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.165200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.165225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.171713] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.172038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.172063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.177375] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.177652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.177677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.184558] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.184863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.184888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.096 [2024-06-10 11:48:16.191331] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.096 [2024-06-10 11:48:16.191706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.096 [2024-06-10 11:48:16.191732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.355 [2024-06-10 11:48:16.199845] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.355 [2024-06-10 11:48:16.200201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.355 [2024-06-10 11:48:16.200226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.355 [2024-06-10 11:48:16.207092] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.207448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.207474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.213921] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.214248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.214273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.221412] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.221702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.221727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.228603] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.228932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.228957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:39:51.356 [2024-06-10 11:48:16.235762] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.236112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.236144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.242911] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.243239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.243265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.249650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.249979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.250004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.256539] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.256882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.256907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.264419] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.264693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.264719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.271272] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.271553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.271586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.278073] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.278355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.278380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.285237] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.285573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.285604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.292315] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.292634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.292659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.299322] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.299655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.299681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.306102] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.306378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.306406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.313060] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.313409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.313436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.319926] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.320234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.320261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.327013] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.327321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.327347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.333841] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.334207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.334232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.340822] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.341093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.341118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.348538] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.348938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.348963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.356687] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.357052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.357077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.365506] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.365886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.365912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.374351] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.374735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.374761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.382884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.383289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.383314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.391563] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.391951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.356 [2024-06-10 11:48:16.391976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.356 [2024-06-10 11:48:16.400305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.356 [2024-06-10 11:48:16.400704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.357 [2024-06-10 11:48:16.400730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.357 [2024-06-10 11:48:16.409375] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.357 [2024-06-10 11:48:16.409750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.357 [2024-06-10 11:48:16.409775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.357 [2024-06-10 11:48:16.418010] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.357 [2024-06-10 11:48:16.418423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.357 [2024-06-10 11:48:16.418449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.357 [2024-06-10 11:48:16.426332] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.357 [2024-06-10 11:48:16.426669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.357 [2024-06-10 11:48:16.426694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.357 [2024-06-10 11:48:16.435204] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.357 [2024-06-10 11:48:16.435541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.357 [2024-06-10 11:48:16.435572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.357 [2024-06-10 11:48:16.442952] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.357 [2024-06-10 11:48:16.443266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.357 
[2024-06-10 11:48:16.443292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.357 [2024-06-10 11:48:16.451148] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.357 [2024-06-10 11:48:16.451512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.357 [2024-06-10 11:48:16.451537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.460161] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.460509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.460534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.468765] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.469157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.469182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.477410] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.477873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.477900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.485920] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.486271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.486296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.493691] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.494031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.494055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.501885] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.502201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.502227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.509563] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.509887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.509913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.517452] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.517722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.517747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.524170] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.524528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.524554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.532215] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.532531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.532556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.539991] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.540331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.540356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.547766] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.548142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.548168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.555668] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.555977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.556003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.563018] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.563343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.563369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.570802] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.617 [2024-06-10 11:48:16.571154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.617 [2024-06-10 11:48:16.571179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.617 [2024-06-10 11:48:16.578371] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.578662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.578687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.586109] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.586400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.586426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.593682] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.593982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.594006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.601403] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.601688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.601713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.608997] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.609366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.609391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.616637] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.616862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.616887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.625279] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.625541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.625566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.633773] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.634163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.634188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.641396] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.641698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.641729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.648733] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.648959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.648985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.655850] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.656129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.656154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.662801] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 
[2024-06-10 11:48:16.663069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.663094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.669475] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.669748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.669773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.676408] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.676634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.676660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.683441] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.683732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.683758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.689701] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.689981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.690006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.696160] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.696418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.696443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.702076] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.702318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.702343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.708556] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.708842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.708868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.618 [2024-06-10 11:48:16.715839] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.618 [2024-06-10 11:48:16.716092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.618 [2024-06-10 11:48:16.716117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.722997] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.723249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.723274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.730504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.730786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.730811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.737552] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.737829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.737855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.745221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.745535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.745560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.752851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.753122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.753147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.759918] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.760228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.760257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.767937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.768161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.768186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.775527] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.775774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.775799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.781916] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.782210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.782235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.788348] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.788670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.788695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.794985] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.795332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.795357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.801464] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.801712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.801740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:39:51.879 [2024-06-10 11:48:16.808278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.808538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.808563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.815121] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.815339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.815366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.822318] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.822616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.822642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.829338] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.829589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.829614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.836363] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.836613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.836638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.843607] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.843878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.843903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.849183] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.849520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.849546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.855492] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.855754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.855779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.861666] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.861983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.862008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.868516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.868745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.868770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.875830] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.876051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.876076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.882986] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.883263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.883288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.889874] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.890103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.890129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.896780] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.879 [2024-06-10 11:48:16.897058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.879 [2024-06-10 11:48:16.897083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.879 [2024-06-10 11:48:16.903883] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.904114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.904140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.910740] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.911001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.911027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.917592] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.917892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.917917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.924749] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.925037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.925062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.931638] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.931919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.931944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.938978] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.939213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.939242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.945776] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.946003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.946028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.952709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.952962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.952987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.959369] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.959702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.959728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.967009] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.967318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.967344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.974100] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.974338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.974364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:51.880 [2024-06-10 11:48:16.980994] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:51.880 [2024-06-10 11:48:16.981233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:51.880 [2024-06-10 11:48:16.981258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:52.140 [2024-06-10 11:48:16.988253] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:52.140 [2024-06-10 11:48:16.988512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.140 [2024-06-10 11:48:16.988538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:52.140 [2024-06-10 11:48:16.995246] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:52.140 [2024-06-10 11:48:16.995525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.140 
[2024-06-10 11:48:16.995551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:52.140 [2024-06-10 11:48:17.001850] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:52.140 [2024-06-10 11:48:17.002101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.140 [2024-06-10 11:48:17.002127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:52.140 [2024-06-10 11:48:17.009119] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:52.140 [2024-06-10 11:48:17.009360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.140 [2024-06-10 11:48:17.009386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:52.140 [2024-06-10 11:48:17.016167] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:52.140 [2024-06-10 11:48:17.016403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.140 [2024-06-10 11:48:17.016429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:52.140 [2024-06-10 11:48:17.023047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2219aa0) with pdu=0x2000190fef90 00:39:52.140 [2024-06-10 11:48:17.023295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:52.140 [2024-06-10 11:48:17.023320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:52.140 00:39:52.140 Latency(us) 00:39:52.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.140 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:39:52.140 nvme0n1 : 2.00 4032.52 504.07 0.00 0.00 3960.68 2621.44 15204.35 00:39:52.140 =================================================================================================================== 00:39:52.140 Total : 4032.52 504.07 0.00 0.00 3960.68 2621.44 15204.35 00:39:52.140 0 00:39:52.140 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:52.140 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:52.140 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:52.141 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:52.141 | .driver_specific 00:39:52.141 | .nvme_error 00:39:52.141 | .status_code 00:39:52.141 | .command_transient_transport_error' 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 260 > 0 )) 00:39:52.400 11:48:17 
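
The trace above is the pass/fail check for the digest-error run: the harness reads the bdev I/O statistics over the bperf RPC socket and extracts the transient-transport-error counter, which has to be non-zero after the forced data-digest (CRC-32C) failures. A stand-alone sketch of the same query, assuming a bdevperf instance listening on /var/tmp/bperf.sock that exposes an nvme0n1 bdev:

    # Ask the running bdevperf app for per-bdev I/O statistics, then pull out the
    # number of completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # host/digest.sh treats a non-zero count (260 in this run) as success, since every
    # injected data-digest error should surface as a transient transport error on the
    # matching WRITE completion.
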
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4163924 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 4163924 ']' 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 4163924 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4163924 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4163924' 00:39:52.400 killing process with pid 4163924 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 4163924 00:39:52.400 Received shutdown signal, test time was about 2.000000 seconds 00:39:52.400 00:39:52.400 Latency(us) 00:39:52.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:52.400 =================================================================================================================== 00:39:52.400 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:52.400 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 4163924 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4160956 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 4160956 ']' 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 4160956 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4160956 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4160956' 00:39:52.659 killing process with pid 4160956 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 4160956 00:39:52.659 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 4160956 00:39:52.918 00:39:52.918 real 0m18.415s 00:39:52.918 user 0m35.666s 00:39:52.918 sys 0m5.223s 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:52.918 ************************************ 00:39:52.918 END TEST nvmf_digest_error 00:39:52.918 ************************************ 00:39:52.918 11:48:17 
nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:52.918 rmmod nvme_tcp 00:39:52.918 rmmod nvme_fabrics 00:39:52.918 rmmod nvme_keyring 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 4160956 ']' 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 4160956 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 4160956 ']' 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 4160956 00:39:52.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (4160956) - No such process 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 4160956 is not found' 00:39:52.918 Process with pid 4160956 is not found 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:52.918 11:48:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.913 11:48:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:54.913 00:39:54.913 real 0m47.446s 00:39:54.913 user 1m12.798s 00:39:54.913 sys 0m16.948s 00:39:54.913 11:48:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:54.913 11:48:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:54.913 ************************************ 00:39:54.913 END TEST nvmf_digest 00:39:54.913 ************************************ 00:39:55.173 11:48:20 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:39:55.173 11:48:20 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:39:55.173 11:48:20 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:39:55.173 11:48:20 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:39:55.173 11:48:20 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:55.173 11:48:20 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 
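
nvmftestfini above unwinds the host/target plumbing from nvmftestinit: the kernel NVMe/TCP initiator modules are unloaded, the target process is killed if it is still running, and the test namespace and addresses are removed. The helper's internals are not shown in this trace, but a rough manual equivalent, assuming the interface and namespace names used in this job, would be:

    # Unload the kernel initiator stack used for the host side of the test
    # (modprobe -r also drops the now-unused nvme_fabrics / nvme_keyring deps).
    sudo modprobe -v -r nvme-tcp
    sudo modprobe -v -r nvme-fabrics
    # Stop the target if it is still around, then drop the namespace and flush the
    # initiator-side test address ($nvmfpid and the names below are assumptions
    # taken from this run's trace).
    kill "$nvmfpid" 2>/dev/null || echo "Process with pid $nvmfpid is not found"
    sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null
    sudo ip -4 addr flush cvl_0_1
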
00:39:55.173 11:48:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:55.173 ************************************ 00:39:55.173 START TEST nvmf_bdevperf 00:39:55.173 ************************************ 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:39:55.173 * Looking for test storage... 00:39:55.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.173 11:48:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:39:55.174 11:48:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:05.159 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:05.159 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:40:05.159 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:05.159 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:05.159 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:05.159 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:05.159 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:05.159 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:05.160 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:05.160 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:05.160 Found net devices under 0000:af:00.0: cvl_0_0 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:05.160 Found net devices under 0000:af:00.1: cvl_0_1 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:05.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:05.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:40:05.160 00:40:05.160 --- 10.0.0.2 ping statistics --- 00:40:05.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.160 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:05.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:05.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:40:05.160 00:40:05.160 --- 10.0.0.1 ping statistics --- 00:40:05.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.160 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4168942 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4168942 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 4168942 ']' 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:05.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:05.160 11:48:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:05.160 [2024-06-10 11:48:28.941962] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:40:05.161 [2024-06-10 11:48:28.942025] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:05.161 EAL: No free 2048 kB hugepages reported on node 1 00:40:05.161 [2024-06-10 11:48:29.059522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:05.161 [2024-06-10 11:48:29.145974] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
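
The sequence above builds the point-to-point NVMe/TCP topology for the bdevperf test: the target port (cvl_0_0) is moved into its own network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, reachability is checked both ways, and nvmf_tgt is then launched inside the namespace. Condensed into a stand-alone sketch (sudo and explicit backgrounding are assumptions; the harness wraps all of this in nvmftestinit and nvmfappstart):

    # Target NIC goes into its own namespace; initiator NIC stays in the root one.
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Firewall rule the harness adds for NVMe/TCP port 4420 on the initiator port.
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions.
    ping -c 1 10.0.0.2
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Kernel initiator on the host side, SPDK target inside the namespace
    # (the harness then waits for the target's RPC socket, /var/tmp/spdk.sock).
    sudo modprobe nvme-tcp
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
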
00:40:05.161 [2024-06-10 11:48:29.146016] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:05.161 [2024-06-10 11:48:29.146029] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:05.161 [2024-06-10 11:48:29.146041] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:05.161 [2024-06-10 11:48:29.146051] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:05.161 [2024-06-10 11:48:29.146157] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:40:05.161 [2024-06-10 11:48:29.146277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:40:05.161 [2024-06-10 11:48:29.146278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:05.161 [2024-06-10 11:48:29.899466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:05.161 Malloc0 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:05.161 [2024-06-10 11:48:29.973675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:05.161 { 00:40:05.161 "params": { 00:40:05.161 "name": "Nvme$subsystem", 00:40:05.161 "trtype": "$TEST_TRANSPORT", 00:40:05.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:05.161 "adrfam": "ipv4", 00:40:05.161 "trsvcid": "$NVMF_PORT", 00:40:05.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:05.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:05.161 "hdgst": ${hdgst:-false}, 00:40:05.161 "ddgst": ${ddgst:-false} 00:40:05.161 }, 00:40:05.161 "method": "bdev_nvme_attach_controller" 00:40:05.161 } 00:40:05.161 EOF 00:40:05.161 )") 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:40:05.161 11:48:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:05.161 "params": { 00:40:05.161 "name": "Nvme1", 00:40:05.161 "trtype": "tcp", 00:40:05.161 "traddr": "10.0.0.2", 00:40:05.161 "adrfam": "ipv4", 00:40:05.161 "trsvcid": "4420", 00:40:05.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:05.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:05.161 "hdgst": false, 00:40:05.161 "ddgst": false 00:40:05.161 }, 00:40:05.161 "method": "bdev_nvme_attach_controller" 00:40:05.161 }' 00:40:05.161 [2024-06-10 11:48:30.032944] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:40:05.161 [2024-06-10 11:48:30.033005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169207 ] 00:40:05.161 EAL: No free 2048 kB hugepages reported on node 1 00:40:05.161 [2024-06-10 11:48:30.154171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:05.161 [2024-06-10 11:48:30.235552] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.420 Running I/O for 1 seconds... 
00:40:06.357 00:40:06.357 Latency(us) 00:40:06.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.357 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:06.357 Verification LBA range: start 0x0 length 0x4000 00:40:06.357 Nvme1n1 : 1.02 8380.66 32.74 0.00 0.00 15205.57 3185.05 17196.65 00:40:06.357 =================================================================================================================== 00:40:06.357 Total : 8380.66 32.74 0.00 0.00 15205.57 3185.05 17196.65 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4169475 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:06.615 { 00:40:06.615 "params": { 00:40:06.615 "name": "Nvme$subsystem", 00:40:06.615 "trtype": "$TEST_TRANSPORT", 00:40:06.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:06.615 "adrfam": "ipv4", 00:40:06.615 "trsvcid": "$NVMF_PORT", 00:40:06.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:06.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:06.615 "hdgst": ${hdgst:-false}, 00:40:06.615 "ddgst": ${ddgst:-false} 00:40:06.615 }, 00:40:06.615 "method": "bdev_nvme_attach_controller" 00:40:06.615 } 00:40:06.615 EOF 00:40:06.615 )") 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:40:06.615 11:48:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:06.615 "params": { 00:40:06.615 "name": "Nvme1", 00:40:06.615 "trtype": "tcp", 00:40:06.615 "traddr": "10.0.0.2", 00:40:06.616 "adrfam": "ipv4", 00:40:06.616 "trsvcid": "4420", 00:40:06.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:06.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:06.616 "hdgst": false, 00:40:06.616 "ddgst": false 00:40:06.616 }, 00:40:06.616 "method": "bdev_nvme_attach_controller" 00:40:06.616 }' 00:40:06.616 [2024-06-10 11:48:31.659709] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:40:06.616 [2024-06-10 11:48:31.659775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4169475 ] 00:40:06.616 EAL: No free 2048 kB hugepages reported on node 1 00:40:06.874 [2024-06-10 11:48:31.779358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.874 [2024-06-10 11:48:31.857322] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:07.132 Running I/O for 15 seconds... 
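For anyone replaying this run by hand, the setup that the xtrace output above walks through condenses into the short shell sketch below. Every command is copied from the log; SPDK_DIR, the rpc.py socket path, the crude sleep standing in for waitforlisten, and the "subsystems"/"bdev" wrapper around the params block printed by gen_nvmf_target_json are assumptions about this runner, not verbatim log content.

#!/usr/bin/env bash
# Hedged recap of the nvmf_bdevperf setup traced above (run as root).
set -e
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path seen in the log

# 1. Move the target-side port (cvl_0_0) into its own netns; keep cvl_0_1 for the initiator.
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # reachability check, as above

# 2. Start nvmf_tgt inside the namespace; core mask 0xE matches the reactors on cores 1-3 logged above.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
sleep 2   # the real script waits with waitforlisten on /var/tmp/spdk.sock instead

# 3. Transport, a 64 MiB / 512 B-block malloc bdev, subsystem, namespace and TCP listener
#    (rpc_cmd in the log forwards the same arguments to scripts/rpc.py).
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 4. Run bdevperf from the default namespace against the listener, feeding it the same
#    bdev_nvme_attach_controller params that gen_nvmf_target_json printed above.
"$SPDK_DIR/build/examples/bdevperf" --json <(cat <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
) -q 128 -o 4096 -w verify -t 15 -f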
00:40:09.667 11:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4168942 00:40:09.667 11:48:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:40:09.667 [2024-06-10 11:48:34.629487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629819] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.629849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.629880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.629916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.629949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.629981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.629998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.630098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.630125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.630151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.630179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.630206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.630233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.667 [2024-06-10 11:48:34.630479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.667 [2024-06-10 11:48:34.630590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.667 [2024-06-10 11:48:34.630605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 
[2024-06-10 11:48:34.630960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.630974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.630989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:13 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.668 [2024-06-10 11:48:34.631663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.668 [2024-06-10 11:48:34.631678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:09.669 [2024-06-10 11:48:34.631772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41520 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.631978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.631990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:09.669 [2024-06-10 11:48:34.632072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.669 [2024-06-10 11:48:34.632742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.669 [2024-06-10 11:48:34.632755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.632769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.632782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.632797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.632809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.632824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.632837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.632851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.632864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.632879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.632891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.632905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.632918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.632932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.632945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.632959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.632971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.632986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.632998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.633013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.633025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.633041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:09.670 [2024-06-10 11:48:34.633054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.633067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c8570 is same with the state(5) to be set 00:40:09.670 [2024-06-10 11:48:34.633081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:09.670 [2024-06-10 11:48:34.633091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:09.670 [2024-06-10 11:48:34.633103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41896 len:8 PRP1 0x0 PRP2 0x0 00:40:09.670 [2024-06-10 11:48:34.633115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:09.670 [2024-06-10 11:48:34.633166] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c8570 was disconnected and freed. reset controller. 
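A note on the scale of the abort dump just above: the aborted WRITEs span lba 41904-42048 and the aborted READs span lba 41032-41896, each in 8-block steps. If both ranges are contiguous (they appear to be), that is 19 + 109 = 128 commands, i.e. exactly the -q 128 queue depth bdevperf was started with, so the entire outstanding queue on qpair 1 was aborted with "SQ DELETION" when the connection to the killed target dropped. A one-line check of that arithmetic:

echo $(( (42048 - 41904) / 8 + 1 + (41896 - 41032) / 8 + 1 ))   # prints 128, matching -q 128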
00:40:09.670 [2024-06-10 11:48:34.636928] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.670 [2024-06-10 11:48:34.636990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.670 [2024-06-10 11:48:34.637841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.670 [2024-06-10 11:48:34.637865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.670 [2024-06-10 11:48:34.637879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.670 [2024-06-10 11:48:34.638118] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.670 [2024-06-10 11:48:34.638354] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.670 [2024-06-10 11:48:34.638367] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.670 [2024-06-10 11:48:34.638381] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.670 [2024-06-10 11:48:34.642121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.670 [2024-06-10 11:48:34.651376] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.670 [2024-06-10 11:48:34.651978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.670 [2024-06-10 11:48:34.652031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.670 [2024-06-10 11:48:34.652064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.670 [2024-06-10 11:48:34.652671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.670 [2024-06-10 11:48:34.653169] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.670 [2024-06-10 11:48:34.653183] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.670 [2024-06-10 11:48:34.653195] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.670 [2024-06-10 11:48:34.656935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.670 [2024-06-10 11:48:34.665517] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.670 [2024-06-10 11:48:34.666108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.670 [2024-06-10 11:48:34.666132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.670 [2024-06-10 11:48:34.666149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.670 [2024-06-10 11:48:34.666386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.670 [2024-06-10 11:48:34.666631] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.670 [2024-06-10 11:48:34.666646] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.670 [2024-06-10 11:48:34.666659] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.670 [2024-06-10 11:48:34.670395] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.670 [2024-06-10 11:48:34.679654] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.670 [2024-06-10 11:48:34.680187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.670 [2024-06-10 11:48:34.680239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.670 [2024-06-10 11:48:34.680271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.670 [2024-06-10 11:48:34.680876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.670 [2024-06-10 11:48:34.681352] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.670 [2024-06-10 11:48:34.681366] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.670 [2024-06-10 11:48:34.681379] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.670 [2024-06-10 11:48:34.685119] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.670 [2024-06-10 11:48:34.693716] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.670 [2024-06-10 11:48:34.694219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.670 [2024-06-10 11:48:34.694270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.670 [2024-06-10 11:48:34.694302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.670 [2024-06-10 11:48:34.694904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.670 [2024-06-10 11:48:34.695378] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.670 [2024-06-10 11:48:34.695391] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.670 [2024-06-10 11:48:34.695404] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.670 [2024-06-10 11:48:34.699135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.670 [2024-06-10 11:48:34.707720] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.670 [2024-06-10 11:48:34.708346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.670 [2024-06-10 11:48:34.708397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.670 [2024-06-10 11:48:34.708429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.670 [2024-06-10 11:48:34.708927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.670 [2024-06-10 11:48:34.709169] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.670 [2024-06-10 11:48:34.709184] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.670 [2024-06-10 11:48:34.709196] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.670 [2024-06-10 11:48:34.712923] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.670 [2024-06-10 11:48:34.721725] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.670 [2024-06-10 11:48:34.722313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.670 [2024-06-10 11:48:34.722365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.670 [2024-06-10 11:48:34.722397] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.670 [2024-06-10 11:48:34.722969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.671 [2024-06-10 11:48:34.723207] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.671 [2024-06-10 11:48:34.723221] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.671 [2024-06-10 11:48:34.723233] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.671 [2024-06-10 11:48:34.726959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.671 [2024-06-10 11:48:34.735760] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.671 [2024-06-10 11:48:34.736344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.671 [2024-06-10 11:48:34.736395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.671 [2024-06-10 11:48:34.736427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.671 [2024-06-10 11:48:34.737032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.671 [2024-06-10 11:48:34.737393] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.671 [2024-06-10 11:48:34.737407] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.671 [2024-06-10 11:48:34.737419] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.671 [2024-06-10 11:48:34.741145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.671 [2024-06-10 11:48:34.749937] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.671 [2024-06-10 11:48:34.750525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.671 [2024-06-10 11:48:34.750592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.671 [2024-06-10 11:48:34.750626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.671 [2024-06-10 11:48:34.751126] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.671 [2024-06-10 11:48:34.751363] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.671 [2024-06-10 11:48:34.751377] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.671 [2024-06-10 11:48:34.751389] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.671 [2024-06-10 11:48:34.755111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.671 [2024-06-10 11:48:34.764138] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.671 [2024-06-10 11:48:34.764724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.671 [2024-06-10 11:48:34.764747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.671 [2024-06-10 11:48:34.764760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.671 [2024-06-10 11:48:34.764996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.671 [2024-06-10 11:48:34.765233] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.671 [2024-06-10 11:48:34.765247] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.671 [2024-06-10 11:48:34.765260] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.931 [2024-06-10 11:48:34.769008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.931 [2024-06-10 11:48:34.778251] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.931 [2024-06-10 11:48:34.778832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.931 [2024-06-10 11:48:34.778855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.931 [2024-06-10 11:48:34.778868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.931 [2024-06-10 11:48:34.779104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.931 [2024-06-10 11:48:34.779341] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.931 [2024-06-10 11:48:34.779355] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.931 [2024-06-10 11:48:34.779368] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.931 [2024-06-10 11:48:34.783102] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.931 [2024-06-10 11:48:34.792333] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.931 [2024-06-10 11:48:34.792925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.931 [2024-06-10 11:48:34.792977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.931 [2024-06-10 11:48:34.793009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.931 [2024-06-10 11:48:34.793423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.931 [2024-06-10 11:48:34.793667] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.931 [2024-06-10 11:48:34.793681] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.793694] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.797414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.932 [2024-06-10 11:48:34.806424] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.807021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.932 [2024-06-10 11:48:34.807071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.932 [2024-06-10 11:48:34.807111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.932 [2024-06-10 11:48:34.807642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.932 [2024-06-10 11:48:34.807879] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.932 [2024-06-10 11:48:34.807893] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.807906] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.811634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.932 [2024-06-10 11:48:34.820423] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.821022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.932 [2024-06-10 11:48:34.821074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.932 [2024-06-10 11:48:34.821106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.932 [2024-06-10 11:48:34.821544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.932 [2024-06-10 11:48:34.821789] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.932 [2024-06-10 11:48:34.821803] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.821816] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.825544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.932 [2024-06-10 11:48:34.834562] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.835159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.932 [2024-06-10 11:48:34.835210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.932 [2024-06-10 11:48:34.835243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.932 [2024-06-10 11:48:34.835849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.932 [2024-06-10 11:48:34.836400] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.932 [2024-06-10 11:48:34.836414] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.836427] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.840155] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.932 [2024-06-10 11:48:34.848736] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.849322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.932 [2024-06-10 11:48:34.849373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.932 [2024-06-10 11:48:34.849405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.932 [2024-06-10 11:48:34.850009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.932 [2024-06-10 11:48:34.850516] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.932 [2024-06-10 11:48:34.850534] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.850546] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.854277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.932 [2024-06-10 11:48:34.862862] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.863419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.932 [2024-06-10 11:48:34.863441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.932 [2024-06-10 11:48:34.863454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.932 [2024-06-10 11:48:34.863697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.932 [2024-06-10 11:48:34.863934] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.932 [2024-06-10 11:48:34.863948] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.863961] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.867695] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.932 [2024-06-10 11:48:34.876936] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.877522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.932 [2024-06-10 11:48:34.877544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.932 [2024-06-10 11:48:34.877557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.932 [2024-06-10 11:48:34.877801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.932 [2024-06-10 11:48:34.878039] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.932 [2024-06-10 11:48:34.878053] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.878065] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.881791] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.932 [2024-06-10 11:48:34.891020] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.891603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.932 [2024-06-10 11:48:34.891626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.932 [2024-06-10 11:48:34.891639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.932 [2024-06-10 11:48:34.891875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.932 [2024-06-10 11:48:34.892112] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.932 [2024-06-10 11:48:34.892126] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.892138] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.895875] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.932 [2024-06-10 11:48:34.905116] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.905699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.932 [2024-06-10 11:48:34.905721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.932 [2024-06-10 11:48:34.905734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.932 [2024-06-10 11:48:34.905971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.932 [2024-06-10 11:48:34.906208] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.932 [2024-06-10 11:48:34.906222] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.906234] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.909966] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.932 [2024-06-10 11:48:34.919216] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.919803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.932 [2024-06-10 11:48:34.919826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.932 [2024-06-10 11:48:34.919839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.932 [2024-06-10 11:48:34.920075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.932 [2024-06-10 11:48:34.920312] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.932 [2024-06-10 11:48:34.920326] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.932 [2024-06-10 11:48:34.920339] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.932 [2024-06-10 11:48:34.924071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.932 [2024-06-10 11:48:34.933320] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.932 [2024-06-10 11:48:34.933884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.933 [2024-06-10 11:48:34.933906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.933 [2024-06-10 11:48:34.933919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.933 [2024-06-10 11:48:34.934155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.933 [2024-06-10 11:48:34.934394] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.933 [2024-06-10 11:48:34.934409] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.933 [2024-06-10 11:48:34.934421] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.933 [2024-06-10 11:48:34.938149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.933 [2024-06-10 11:48:34.947397] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.933 [2024-06-10 11:48:34.947983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.933 [2024-06-10 11:48:34.948034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.933 [2024-06-10 11:48:34.948066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.933 [2024-06-10 11:48:34.948525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.933 [2024-06-10 11:48:34.948768] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.933 [2024-06-10 11:48:34.948783] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.933 [2024-06-10 11:48:34.948796] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.933 [2024-06-10 11:48:34.952521] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.933 [2024-06-10 11:48:34.961584] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.933 [2024-06-10 11:48:34.962175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.933 [2024-06-10 11:48:34.962226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.933 [2024-06-10 11:48:34.962260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.933 [2024-06-10 11:48:34.962732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.933 [2024-06-10 11:48:34.962970] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.933 [2024-06-10 11:48:34.962985] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.933 [2024-06-10 11:48:34.962997] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.933 [2024-06-10 11:48:34.966732] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.933 [2024-06-10 11:48:34.975768] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.933 [2024-06-10 11:48:34.976270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.933 [2024-06-10 11:48:34.976322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.933 [2024-06-10 11:48:34.976354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.933 [2024-06-10 11:48:34.976822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.933 [2024-06-10 11:48:34.977060] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.933 [2024-06-10 11:48:34.977074] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.933 [2024-06-10 11:48:34.977087] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.933 [2024-06-10 11:48:34.980816] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.933 [2024-06-10 11:48:34.989828] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.933 [2024-06-10 11:48:34.990424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.933 [2024-06-10 11:48:34.990476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.933 [2024-06-10 11:48:34.990508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.933 [2024-06-10 11:48:34.990956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.933 [2024-06-10 11:48:34.991195] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.933 [2024-06-10 11:48:34.991209] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.933 [2024-06-10 11:48:34.991226] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.933 [2024-06-10 11:48:34.994969] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.933 [2024-06-10 11:48:35.003994] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.933 [2024-06-10 11:48:35.004568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.933 [2024-06-10 11:48:35.004633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.933 [2024-06-10 11:48:35.004665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.933 [2024-06-10 11:48:35.005157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.933 [2024-06-10 11:48:35.005395] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.933 [2024-06-10 11:48:35.005409] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.933 [2024-06-10 11:48:35.005421] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.933 [2024-06-10 11:48:35.009154] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:09.933 [2024-06-10 11:48:35.018189] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.933 [2024-06-10 11:48:35.018749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.933 [2024-06-10 11:48:35.018771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.933 [2024-06-10 11:48:35.018785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.933 [2024-06-10 11:48:35.019020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.933 [2024-06-10 11:48:35.019257] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.933 [2024-06-10 11:48:35.019271] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.933 [2024-06-10 11:48:35.019284] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:09.933 [2024-06-10 11:48:35.023014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:09.933 [2024-06-10 11:48:35.032255] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:09.933 [2024-06-10 11:48:35.032798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:09.933 [2024-06-10 11:48:35.032850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:09.933 [2024-06-10 11:48:35.032882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:09.933 [2024-06-10 11:48:35.033470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:09.933 [2024-06-10 11:48:35.033901] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:09.933 [2024-06-10 11:48:35.033925] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:09.933 [2024-06-10 11:48:35.033946] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.193 [2024-06-10 11:48:35.040184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.193 [2024-06-10 11:48:35.047222] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.193 [2024-06-10 11:48:35.047742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.193 [2024-06-10 11:48:35.047770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.193 [2024-06-10 11:48:35.047785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.193 [2024-06-10 11:48:35.048042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.193 [2024-06-10 11:48:35.048302] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.193 [2024-06-10 11:48:35.048318] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.193 [2024-06-10 11:48:35.048331] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.193 [2024-06-10 11:48:35.052393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.193 [2024-06-10 11:48:35.061337] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.193 [2024-06-10 11:48:35.061743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.193 [2024-06-10 11:48:35.061766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.193 [2024-06-10 11:48:35.061779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.193 [2024-06-10 11:48:35.062014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.193 [2024-06-10 11:48:35.062251] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.193 [2024-06-10 11:48:35.062265] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.193 [2024-06-10 11:48:35.062277] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.193 [2024-06-10 11:48:35.066007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.193 [2024-06-10 11:48:35.075487] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.193 [2024-06-10 11:48:35.076050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.193 [2024-06-10 11:48:35.076072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.193 [2024-06-10 11:48:35.076085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.193 [2024-06-10 11:48:35.076320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.193 [2024-06-10 11:48:35.076556] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.193 [2024-06-10 11:48:35.076570] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.193 [2024-06-10 11:48:35.076588] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.080312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.194 [2024-06-10 11:48:35.089559] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.090158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.090181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.090194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.194 [2024-06-10 11:48:35.090430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.194 [2024-06-10 11:48:35.090681] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.194 [2024-06-10 11:48:35.090696] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.194 [2024-06-10 11:48:35.090708] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.094438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.194 [2024-06-10 11:48:35.103682] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.104273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.104324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.104356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.194 [2024-06-10 11:48:35.104873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.194 [2024-06-10 11:48:35.105111] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.194 [2024-06-10 11:48:35.105125] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.194 [2024-06-10 11:48:35.105137] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.108869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.194 [2024-06-10 11:48:35.117892] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.118458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.118510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.118541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.194 [2024-06-10 11:48:35.119000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.194 [2024-06-10 11:48:35.119237] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.194 [2024-06-10 11:48:35.119251] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.194 [2024-06-10 11:48:35.119264] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.122990] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.194 [2024-06-10 11:48:35.132002] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.132582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.132605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.132618] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.194 [2024-06-10 11:48:35.132854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.194 [2024-06-10 11:48:35.133090] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.194 [2024-06-10 11:48:35.133104] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.194 [2024-06-10 11:48:35.133117] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.136854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.194 [2024-06-10 11:48:35.146090] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.146632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.146683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.146716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.194 [2024-06-10 11:48:35.147211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.194 [2024-06-10 11:48:35.147449] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.194 [2024-06-10 11:48:35.147463] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.194 [2024-06-10 11:48:35.147476] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.151205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.194 [2024-06-10 11:48:35.160230] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.160800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.160823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.160837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.194 [2024-06-10 11:48:35.161073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.194 [2024-06-10 11:48:35.161310] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.194 [2024-06-10 11:48:35.161324] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.194 [2024-06-10 11:48:35.161336] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.165068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.194 [2024-06-10 11:48:35.174320] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.174908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.174931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.174944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.194 [2024-06-10 11:48:35.175179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.194 [2024-06-10 11:48:35.175416] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.194 [2024-06-10 11:48:35.175430] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.194 [2024-06-10 11:48:35.175442] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.179171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.194 [2024-06-10 11:48:35.188415] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.189000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.189051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.189091] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.194 [2024-06-10 11:48:35.189586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.194 [2024-06-10 11:48:35.189824] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.194 [2024-06-10 11:48:35.189838] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.194 [2024-06-10 11:48:35.189850] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.193581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.194 [2024-06-10 11:48:35.202600] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.203189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.203239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.203271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.194 [2024-06-10 11:48:35.203876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.194 [2024-06-10 11:48:35.204314] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.194 [2024-06-10 11:48:35.204328] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.194 [2024-06-10 11:48:35.204340] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.194 [2024-06-10 11:48:35.208064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.194 [2024-06-10 11:48:35.216646] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.194 [2024-06-10 11:48:35.217136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.194 [2024-06-10 11:48:35.217186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.194 [2024-06-10 11:48:35.217218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.195 [2024-06-10 11:48:35.217691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.195 [2024-06-10 11:48:35.217928] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.195 [2024-06-10 11:48:35.217941] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.195 [2024-06-10 11:48:35.217954] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.195 [2024-06-10 11:48:35.221676] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.195 [2024-06-10 11:48:35.230689] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.195 [2024-06-10 11:48:35.231261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.195 [2024-06-10 11:48:35.231283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.195 [2024-06-10 11:48:35.231297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.195 [2024-06-10 11:48:35.231532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.195 [2024-06-10 11:48:35.231777] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.195 [2024-06-10 11:48:35.231796] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.195 [2024-06-10 11:48:35.231808] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.195 [2024-06-10 11:48:35.235535] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.195 [2024-06-10 11:48:35.244781] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.195 [2024-06-10 11:48:35.245339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.195 [2024-06-10 11:48:35.245361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.195 [2024-06-10 11:48:35.245374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.195 [2024-06-10 11:48:35.245616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.195 [2024-06-10 11:48:35.245854] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.195 [2024-06-10 11:48:35.245868] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.195 [2024-06-10 11:48:35.245880] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.195 [2024-06-10 11:48:35.249602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.195 [2024-06-10 11:48:35.258822] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.195 [2024-06-10 11:48:35.259408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.195 [2024-06-10 11:48:35.259459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.195 [2024-06-10 11:48:35.259490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.195 [2024-06-10 11:48:35.260092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.195 [2024-06-10 11:48:35.260610] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.195 [2024-06-10 11:48:35.260625] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.195 [2024-06-10 11:48:35.260637] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.195 [2024-06-10 11:48:35.264363] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.195 [2024-06-10 11:48:35.272942] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.195 [2024-06-10 11:48:35.273517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.195 [2024-06-10 11:48:35.273539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.195 [2024-06-10 11:48:35.273553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.195 [2024-06-10 11:48:35.273795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.195 [2024-06-10 11:48:35.274033] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.195 [2024-06-10 11:48:35.274047] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.195 [2024-06-10 11:48:35.274059] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.195 [2024-06-10 11:48:35.277784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.195 [2024-06-10 11:48:35.287010] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.195 [2024-06-10 11:48:35.287594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.195 [2024-06-10 11:48:35.287617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.195 [2024-06-10 11:48:35.287630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.195 [2024-06-10 11:48:35.287865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.195 [2024-06-10 11:48:35.288102] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.195 [2024-06-10 11:48:35.288116] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.195 [2024-06-10 11:48:35.288128] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.195 [2024-06-10 11:48:35.291857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.455 [2024-06-10 11:48:35.301094] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.455 [2024-06-10 11:48:35.301673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-06-10 11:48:35.301698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.455 [2024-06-10 11:48:35.301711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.455 [2024-06-10 11:48:35.301949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.455 [2024-06-10 11:48:35.302186] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.455 [2024-06-10 11:48:35.302200] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.455 [2024-06-10 11:48:35.302212] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.455 [2024-06-10 11:48:35.305946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.455 [2024-06-10 11:48:35.315178] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.455 [2024-06-10 11:48:35.315774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-06-10 11:48:35.315824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.455 [2024-06-10 11:48:35.315856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.455 [2024-06-10 11:48:35.316430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.455 [2024-06-10 11:48:35.316675] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.455 [2024-06-10 11:48:35.316690] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.455 [2024-06-10 11:48:35.316703] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.455 [2024-06-10 11:48:35.320425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.455 [2024-06-10 11:48:35.329216] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.455 [2024-06-10 11:48:35.329805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-06-10 11:48:35.329857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.455 [2024-06-10 11:48:35.329898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.455 [2024-06-10 11:48:35.330446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.455 [2024-06-10 11:48:35.330691] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.455 [2024-06-10 11:48:35.330706] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.455 [2024-06-10 11:48:35.330718] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.455 [2024-06-10 11:48:35.334445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.455 [2024-06-10 11:48:35.343245] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.455 [2024-06-10 11:48:35.343809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-06-10 11:48:35.343860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.455 [2024-06-10 11:48:35.343893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.455 [2024-06-10 11:48:35.344452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.455 [2024-06-10 11:48:35.344696] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.455 [2024-06-10 11:48:35.344711] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.455 [2024-06-10 11:48:35.344723] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.455 [2024-06-10 11:48:35.348455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.455 [2024-06-10 11:48:35.357254] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.455 [2024-06-10 11:48:35.357843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.455 [2024-06-10 11:48:35.357894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.455 [2024-06-10 11:48:35.357927] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.455 [2024-06-10 11:48:35.358515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.455 [2024-06-10 11:48:35.358989] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.455 [2024-06-10 11:48:35.359004] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.455 [2024-06-10 11:48:35.359016] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.455 [2024-06-10 11:48:35.362747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.455 [2024-06-10 11:48:35.371329] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.455 [2024-06-10 11:48:35.371851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.371874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.456 [2024-06-10 11:48:35.371887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.456 [2024-06-10 11:48:35.372122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.456 [2024-06-10 11:48:35.372359] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.456 [2024-06-10 11:48:35.372379] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.456 [2024-06-10 11:48:35.372394] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.456 [2024-06-10 11:48:35.376128] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.456 [2024-06-10 11:48:35.385372] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.456 [2024-06-10 11:48:35.385889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.385912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.456 [2024-06-10 11:48:35.385925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.456 [2024-06-10 11:48:35.386159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.456 [2024-06-10 11:48:35.386396] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.456 [2024-06-10 11:48:35.386412] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.456 [2024-06-10 11:48:35.386424] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.456 [2024-06-10 11:48:35.390162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.456 [2024-06-10 11:48:35.399410] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.456 [2024-06-10 11:48:35.399996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.400048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.456 [2024-06-10 11:48:35.400080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.456 [2024-06-10 11:48:35.400640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.456 [2024-06-10 11:48:35.400877] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.456 [2024-06-10 11:48:35.400891] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.456 [2024-06-10 11:48:35.400903] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.456 [2024-06-10 11:48:35.404628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.456 [2024-06-10 11:48:35.413519] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.456 [2024-06-10 11:48:35.414082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.414106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.456 [2024-06-10 11:48:35.414120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.456 [2024-06-10 11:48:35.414357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.456 [2024-06-10 11:48:35.414600] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.456 [2024-06-10 11:48:35.414615] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.456 [2024-06-10 11:48:35.414627] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.456 [2024-06-10 11:48:35.418361] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.456 [2024-06-10 11:48:35.427624] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.456 [2024-06-10 11:48:35.428221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.428244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.456 [2024-06-10 11:48:35.428257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.456 [2024-06-10 11:48:35.428494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.456 [2024-06-10 11:48:35.428737] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.456 [2024-06-10 11:48:35.428752] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.456 [2024-06-10 11:48:35.428764] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.456 [2024-06-10 11:48:35.432492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.456 [2024-06-10 11:48:35.441742] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.456 [2024-06-10 11:48:35.442249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.442299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.456 [2024-06-10 11:48:35.442331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.456 [2024-06-10 11:48:35.442822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.456 [2024-06-10 11:48:35.443060] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.456 [2024-06-10 11:48:35.443074] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.456 [2024-06-10 11:48:35.443087] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.456 [2024-06-10 11:48:35.446820] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.456 [2024-06-10 11:48:35.455850] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.456 [2024-06-10 11:48:35.456335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.456357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.456 [2024-06-10 11:48:35.456370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.456 [2024-06-10 11:48:35.456612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.456 [2024-06-10 11:48:35.456850] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.456 [2024-06-10 11:48:35.456864] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.456 [2024-06-10 11:48:35.456877] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.456 [2024-06-10 11:48:35.460607] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.456 [2024-06-10 11:48:35.469847] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.456 [2024-06-10 11:48:35.470407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.470462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.456 [2024-06-10 11:48:35.470494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.456 [2024-06-10 11:48:35.471030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.456 [2024-06-10 11:48:35.471269] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.456 [2024-06-10 11:48:35.471283] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.456 [2024-06-10 11:48:35.471295] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.456 [2024-06-10 11:48:35.475028] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.456 [2024-06-10 11:48:35.484053] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.456 [2024-06-10 11:48:35.484527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.484549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.456 [2024-06-10 11:48:35.484562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.456 [2024-06-10 11:48:35.484805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.456 [2024-06-10 11:48:35.485043] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.456 [2024-06-10 11:48:35.485057] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.456 [2024-06-10 11:48:35.485069] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.456 [2024-06-10 11:48:35.488802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.456 [2024-06-10 11:48:35.498049] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.456 [2024-06-10 11:48:35.498554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.456 [2024-06-10 11:48:35.498616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.457 [2024-06-10 11:48:35.498648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.457 [2024-06-10 11:48:35.499169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.457 [2024-06-10 11:48:35.499470] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.457 [2024-06-10 11:48:35.499493] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.457 [2024-06-10 11:48:35.499514] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.457 [2024-06-10 11:48:35.505751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.457 [2024-06-10 11:48:35.513123] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.457 [2024-06-10 11:48:35.513681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-06-10 11:48:35.513706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.457 [2024-06-10 11:48:35.513720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.457 [2024-06-10 11:48:35.513976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.457 [2024-06-10 11:48:35.514232] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.457 [2024-06-10 11:48:35.514248] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.457 [2024-06-10 11:48:35.514266] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.457 [2024-06-10 11:48:35.518323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.457 [2024-06-10 11:48:35.527221] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.457 [2024-06-10 11:48:35.527666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-06-10 11:48:35.527689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.457 [2024-06-10 11:48:35.527702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.457 [2024-06-10 11:48:35.527939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.457 [2024-06-10 11:48:35.528176] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.457 [2024-06-10 11:48:35.528191] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.457 [2024-06-10 11:48:35.528203] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.457 [2024-06-10 11:48:35.531938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.457 [2024-06-10 11:48:35.541423] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.457 [2024-06-10 11:48:35.541866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-06-10 11:48:35.541917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.457 [2024-06-10 11:48:35.541949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.457 [2024-06-10 11:48:35.542536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.457 [2024-06-10 11:48:35.543142] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.457 [2024-06-10 11:48:35.543177] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.457 [2024-06-10 11:48:35.543190] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.457 [2024-06-10 11:48:35.546919] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.457 [2024-06-10 11:48:35.555494] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.457 [2024-06-10 11:48:35.556076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.457 [2024-06-10 11:48:35.556099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.457 [2024-06-10 11:48:35.556112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.457 [2024-06-10 11:48:35.556347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.457 [2024-06-10 11:48:35.556591] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.457 [2024-06-10 11:48:35.556606] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.457 [2024-06-10 11:48:35.556618] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.716 [2024-06-10 11:48:35.560342] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.716 [2024-06-10 11:48:35.569591] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.716 [2024-06-10 11:48:35.570093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.716 [2024-06-10 11:48:35.570120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.716 [2024-06-10 11:48:35.570133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.716 [2024-06-10 11:48:35.570369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.716 [2024-06-10 11:48:35.570613] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.570627] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.570640] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.574365] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.717 [2024-06-10 11:48:35.583606] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.584182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.584204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.584217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.584452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.584695] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.584709] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.584722] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.588444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.717 [2024-06-10 11:48:35.597680] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.598232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.598283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.598314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.598919] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.599430] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.599444] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.599456] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.603188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.717 [2024-06-10 11:48:35.611757] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.612331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.612391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.612423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.613028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.613553] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.613567] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.613584] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.617312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.717 [2024-06-10 11:48:35.625895] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.626356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.626406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.626438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.627051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.627289] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.627303] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.627315] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.631039] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.717 [2024-06-10 11:48:35.640073] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.640506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.640528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.640541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.640781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.641018] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.641032] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.641044] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.644775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.717 [2024-06-10 11:48:35.654374] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.654837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.654861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.654874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.655110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.655346] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.655360] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.655373] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.659113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.717 [2024-06-10 11:48:35.668591] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.669104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.669160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.669192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.669741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.669980] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.669994] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.670006] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.673736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.717 [2024-06-10 11:48:35.682759] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.683269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.683292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.683304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.683541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.683785] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.683800] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.683812] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.687543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.717 [2024-06-10 11:48:35.696794] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.697389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.697438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.697469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.698071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.698635] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.698649] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.698662] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.702385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.717 [2024-06-10 11:48:35.710977] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.711491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.711513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.711530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.711772] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.712010] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.712024] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.712037] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.715772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.717 [2024-06-10 11:48:35.725077] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.725700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.725723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.725737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.725971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.726209] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.726223] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.726235] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.729972] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.717 [2024-06-10 11:48:35.739210] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.739717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.739740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.739753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.739989] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.740225] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.740239] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.740252] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.743987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.717 [2024-06-10 11:48:35.753234] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.753741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.753765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.753778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.754014] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.754250] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.754268] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.754280] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.758016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.717 [2024-06-10 11:48:35.767261] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.767843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.767866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.767879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.768116] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.768354] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.768368] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.768380] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.772124] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.717 [2024-06-10 11:48:35.781358] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.781963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.781986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.781999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.782235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.782472] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.782486] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.782498] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.786229] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.717 [2024-06-10 11:48:35.795476] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.796065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.796088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.796101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.796337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.796580] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.796595] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.796607] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.800335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.717 [2024-06-10 11:48:35.809587] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.717 [2024-06-10 11:48:35.810027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.717 [2024-06-10 11:48:35.810049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.717 [2024-06-10 11:48:35.810062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.717 [2024-06-10 11:48:35.810297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.717 [2024-06-10 11:48:35.810534] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.717 [2024-06-10 11:48:35.810548] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.717 [2024-06-10 11:48:35.810560] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.717 [2024-06-10 11:48:35.814295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.978 [2024-06-10 11:48:35.823767] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.978 [2024-06-10 11:48:35.824138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.978 [2024-06-10 11:48:35.824161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.978 [2024-06-10 11:48:35.824174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.978 [2024-06-10 11:48:35.824409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.978 [2024-06-10 11:48:35.824653] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.978 [2024-06-10 11:48:35.824668] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.978 [2024-06-10 11:48:35.824680] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.978 [2024-06-10 11:48:35.828408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.978 [2024-06-10 11:48:35.837862] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.978 [2024-06-10 11:48:35.838430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.978 [2024-06-10 11:48:35.838480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.978 [2024-06-10 11:48:35.838511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.978 [2024-06-10 11:48:35.838977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.978 [2024-06-10 11:48:35.839216] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.978 [2024-06-10 11:48:35.839230] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.978 [2024-06-10 11:48:35.839242] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.978 [2024-06-10 11:48:35.842975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.978 [2024-06-10 11:48:35.852006] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.978 [2024-06-10 11:48:35.852457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.978 [2024-06-10 11:48:35.852507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.978 [2024-06-10 11:48:35.852539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.978 [2024-06-10 11:48:35.853076] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.978 [2024-06-10 11:48:35.853314] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.978 [2024-06-10 11:48:35.853329] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.978 [2024-06-10 11:48:35.853341] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.978 [2024-06-10 11:48:35.857069] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.978 [2024-06-10 11:48:35.866087] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.978 [2024-06-10 11:48:35.866675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.978 [2024-06-10 11:48:35.866728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.978 [2024-06-10 11:48:35.866760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.978 [2024-06-10 11:48:35.867163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.978 [2024-06-10 11:48:35.867401] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.978 [2024-06-10 11:48:35.867415] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.978 [2024-06-10 11:48:35.867428] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.978 [2024-06-10 11:48:35.871170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.978 [2024-06-10 11:48:35.880192] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.978 [2024-06-10 11:48:35.880647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.978 [2024-06-10 11:48:35.880670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.978 [2024-06-10 11:48:35.880683] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.978 [2024-06-10 11:48:35.880920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.978 [2024-06-10 11:48:35.881156] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.978 [2024-06-10 11:48:35.881171] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.978 [2024-06-10 11:48:35.881183] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.978 [2024-06-10 11:48:35.884915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.978 [2024-06-10 11:48:35.894372] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.978 [2024-06-10 11:48:35.894933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.978 [2024-06-10 11:48:35.894956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.978 [2024-06-10 11:48:35.894969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.978 [2024-06-10 11:48:35.895206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.978 [2024-06-10 11:48:35.895442] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.978 [2024-06-10 11:48:35.895456] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.978 [2024-06-10 11:48:35.895472] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.978 [2024-06-10 11:48:35.899205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.978 [2024-06-10 11:48:35.908453] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.978 [2024-06-10 11:48:35.909037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.978 [2024-06-10 11:48:35.909060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.978 [2024-06-10 11:48:35.909073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.978 [2024-06-10 11:48:35.909310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.978 [2024-06-10 11:48:35.909545] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.978 [2024-06-10 11:48:35.909559] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.978 [2024-06-10 11:48:35.909572] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.978 [2024-06-10 11:48:35.913304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.978 [2024-06-10 11:48:35.922545] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.978 [2024-06-10 11:48:35.923128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.978 [2024-06-10 11:48:35.923177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.978 [2024-06-10 11:48:35.923210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.978 [2024-06-10 11:48:35.923812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.978 [2024-06-10 11:48:35.924381] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.978 [2024-06-10 11:48:35.924404] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.978 [2024-06-10 11:48:35.924425] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.978 [2024-06-10 11:48:35.930667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.978 [2024-06-10 11:48:35.937823] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.978 [2024-06-10 11:48:35.938285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.978 [2024-06-10 11:48:35.938310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.978 [2024-06-10 11:48:35.938324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:35.938603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:35.938862] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:35.938877] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:35.938891] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:35.942947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.979 [2024-06-10 11:48:35.951938] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:35.952489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:35.952538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:35.952570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:35.953051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:35.953289] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:35.953303] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:35.953315] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:35.957040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.979 [2024-06-10 11:48:35.966084] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:35.966597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:35.966620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:35.966634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:35.966871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:35.967108] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:35.967123] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:35.967135] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:35.970872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.979 [2024-06-10 11:48:35.980115] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:35.980816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:35.980870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:35.980902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:35.981271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:35.981508] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:35.981523] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:35.981535] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:35.985270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.979 [2024-06-10 11:48:35.994289] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:35.994854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:35.994906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:35.994938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:35.995475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:35.995879] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:35.995903] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:35.995923] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:36.002157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.979 [2024-06-10 11:48:36.009003] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:36.009616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:36.009667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:36.009700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:36.010225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:36.010483] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:36.010498] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:36.010512] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:36.014569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.979 [2024-06-10 11:48:36.023033] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:36.023631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:36.023682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:36.023716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:36.024304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:36.024603] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:36.024617] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:36.024630] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:36.028348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.979 [2024-06-10 11:48:36.037158] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:36.037748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:36.037799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:36.037831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:36.038225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:36.038462] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:36.038476] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:36.038492] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:36.042227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.979 [2024-06-10 11:48:36.051249] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:36.051836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:36.051888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:36.051920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:36.052400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:36.052647] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:36.052662] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:36.052674] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:36.056396] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:10.979 [2024-06-10 11:48:36.065416] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:36.065932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:36.065955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:36.065968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:36.066203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:36.066440] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.979 [2024-06-10 11:48:36.066454] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.979 [2024-06-10 11:48:36.066466] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:10.979 [2024-06-10 11:48:36.070207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:10.979 [2024-06-10 11:48:36.079449] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:10.979 [2024-06-10 11:48:36.080033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:10.979 [2024-06-10 11:48:36.080056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:10.979 [2024-06-10 11:48:36.080069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:10.979 [2024-06-10 11:48:36.080305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:10.979 [2024-06-10 11:48:36.080542] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:10.980 [2024-06-10 11:48:36.080556] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:10.980 [2024-06-10 11:48:36.080569] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.240 [2024-06-10 11:48:36.084305] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.240 [2024-06-10 11:48:36.093542] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.240 [2024-06-10 11:48:36.094138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.240 [2024-06-10 11:48:36.094196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.240 [2024-06-10 11:48:36.094229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.240 [2024-06-10 11:48:36.094722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.240 [2024-06-10 11:48:36.094961] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.240 [2024-06-10 11:48:36.094975] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.240 [2024-06-10 11:48:36.094988] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.240 [2024-06-10 11:48:36.098719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.240 [2024-06-10 11:48:36.107735] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.240 [2024-06-10 11:48:36.108180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.240 [2024-06-10 11:48:36.108202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.240 [2024-06-10 11:48:36.108216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.240 [2024-06-10 11:48:36.108452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.240 [2024-06-10 11:48:36.108696] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.240 [2024-06-10 11:48:36.108711] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.240 [2024-06-10 11:48:36.108724] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.240 [2024-06-10 11:48:36.112453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.240 [2024-06-10 11:48:36.121912] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.240 [2024-06-10 11:48:36.122419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.240 [2024-06-10 11:48:36.122440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.240 [2024-06-10 11:48:36.122453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.240 [2024-06-10 11:48:36.122694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.240 [2024-06-10 11:48:36.122931] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.240 [2024-06-10 11:48:36.122945] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.240 [2024-06-10 11:48:36.122957] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.240 [2024-06-10 11:48:36.126687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.240 [2024-06-10 11:48:36.135939] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.240 [2024-06-10 11:48:36.136505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.240 [2024-06-10 11:48:36.136556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.240 [2024-06-10 11:48:36.136600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.240 [2024-06-10 11:48:36.137169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.240 [2024-06-10 11:48:36.137569] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.240 [2024-06-10 11:48:36.137598] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.240 [2024-06-10 11:48:36.137619] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.240 [2024-06-10 11:48:36.143857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.240 [2024-06-10 11:48:36.150761] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.240 [2024-06-10 11:48:36.151338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.240 [2024-06-10 11:48:36.151362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.240 [2024-06-10 11:48:36.151377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.240 [2024-06-10 11:48:36.151640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.240 [2024-06-10 11:48:36.151898] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.240 [2024-06-10 11:48:36.151913] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.240 [2024-06-10 11:48:36.151926] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.240 [2024-06-10 11:48:36.155988] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.240 [2024-06-10 11:48:36.164931] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.240 [2024-06-10 11:48:36.165437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.240 [2024-06-10 11:48:36.165460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.240 [2024-06-10 11:48:36.165473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.240 [2024-06-10 11:48:36.165715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.240 [2024-06-10 11:48:36.165952] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.240 [2024-06-10 11:48:36.165966] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.240 [2024-06-10 11:48:36.165979] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.240 [2024-06-10 11:48:36.169713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.240 [2024-06-10 11:48:36.178961] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.240 [2024-06-10 11:48:36.179547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.240 [2024-06-10 11:48:36.179613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.240 [2024-06-10 11:48:36.179648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.240 [2024-06-10 11:48:36.180235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.240 [2024-06-10 11:48:36.180763] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.240 [2024-06-10 11:48:36.180778] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.240 [2024-06-10 11:48:36.180791] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.240 [2024-06-10 11:48:36.184519] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.240 [2024-06-10 11:48:36.193094] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.240 [2024-06-10 11:48:36.193672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.240 [2024-06-10 11:48:36.193695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.240 [2024-06-10 11:48:36.193708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.240 [2024-06-10 11:48:36.193943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.240 [2024-06-10 11:48:36.194179] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.240 [2024-06-10 11:48:36.194194] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.194206] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.197940] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.241 [2024-06-10 11:48:36.207195] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.207786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.207838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.207870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.208465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.241 [2024-06-10 11:48:36.208708] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.241 [2024-06-10 11:48:36.208723] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.208735] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.212464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.241 [2024-06-10 11:48:36.221269] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.221849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.221872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.221885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.222121] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.241 [2024-06-10 11:48:36.222359] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.241 [2024-06-10 11:48:36.222374] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.222386] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.226117] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.241 [2024-06-10 11:48:36.235345] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.235921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.235973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.236013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.236666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.241 [2024-06-10 11:48:36.237113] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.241 [2024-06-10 11:48:36.237127] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.237139] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.240873] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.241 [2024-06-10 11:48:36.249447] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.250025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.250078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.250110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.250714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.241 [2024-06-10 11:48:36.251038] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.241 [2024-06-10 11:48:36.251052] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.251065] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.254799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.241 [2024-06-10 11:48:36.263596] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.264177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.264199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.264212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.264448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.241 [2024-06-10 11:48:36.264691] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.241 [2024-06-10 11:48:36.264706] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.264719] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.268448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.241 [2024-06-10 11:48:36.277699] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.278262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.278285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.278299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.278534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.241 [2024-06-10 11:48:36.278778] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.241 [2024-06-10 11:48:36.278796] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.278809] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.282538] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.241 [2024-06-10 11:48:36.291783] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.292354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.292376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.292389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.292631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.241 [2024-06-10 11:48:36.292870] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.241 [2024-06-10 11:48:36.292884] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.292896] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.296625] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.241 [2024-06-10 11:48:36.305858] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.306445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.306496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.306528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.307107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.241 [2024-06-10 11:48:36.307345] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.241 [2024-06-10 11:48:36.307359] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.307371] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.311099] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.241 [2024-06-10 11:48:36.319886] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.320469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.320519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.320550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.321098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.241 [2024-06-10 11:48:36.321335] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.241 [2024-06-10 11:48:36.321349] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.241 [2024-06-10 11:48:36.321362] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.241 [2024-06-10 11:48:36.325083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.241 [2024-06-10 11:48:36.333879] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.241 [2024-06-10 11:48:36.334446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.241 [2024-06-10 11:48:36.334468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.241 [2024-06-10 11:48:36.334481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.241 [2024-06-10 11:48:36.334723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.242 [2024-06-10 11:48:36.334961] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.242 [2024-06-10 11:48:36.334975] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.242 [2024-06-10 11:48:36.334987] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.242 [2024-06-10 11:48:36.338714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.502 [2024-06-10 11:48:36.347954] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.502 [2024-06-10 11:48:36.348532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.502 [2024-06-10 11:48:36.348554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.502 [2024-06-10 11:48:36.348567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.502 [2024-06-10 11:48:36.348809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.502 [2024-06-10 11:48:36.349046] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.502 [2024-06-10 11:48:36.349060] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.502 [2024-06-10 11:48:36.349072] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.502 [2024-06-10 11:48:36.352804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.502 [2024-06-10 11:48:36.362046] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.502 [2024-06-10 11:48:36.362631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.502 [2024-06-10 11:48:36.362655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.502 [2024-06-10 11:48:36.362668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.502 [2024-06-10 11:48:36.362904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.502 [2024-06-10 11:48:36.363141] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.502 [2024-06-10 11:48:36.363155] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.502 [2024-06-10 11:48:36.363167] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.502 [2024-06-10 11:48:36.366899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.502 [2024-06-10 11:48:36.376149] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.502 [2024-06-10 11:48:36.376708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.502 [2024-06-10 11:48:36.376731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.502 [2024-06-10 11:48:36.376744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.502 [2024-06-10 11:48:36.376985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.502 [2024-06-10 11:48:36.377221] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.502 [2024-06-10 11:48:36.377235] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.502 [2024-06-10 11:48:36.377247] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.502 [2024-06-10 11:48:36.380988] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.502 [2024-06-10 11:48:36.390222] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.502 [2024-06-10 11:48:36.390773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.502 [2024-06-10 11:48:36.390796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.502 [2024-06-10 11:48:36.390809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.502 [2024-06-10 11:48:36.391044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.502 [2024-06-10 11:48:36.391281] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.502 [2024-06-10 11:48:36.391294] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.502 [2024-06-10 11:48:36.391307] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.502 [2024-06-10 11:48:36.395040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.502 [2024-06-10 11:48:36.404271] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.502 [2024-06-10 11:48:36.404845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.502 [2024-06-10 11:48:36.404868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.502 [2024-06-10 11:48:36.404881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.502 [2024-06-10 11:48:36.405117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.502 [2024-06-10 11:48:36.405619] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.502 [2024-06-10 11:48:36.405637] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.502 [2024-06-10 11:48:36.405650] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.502 [2024-06-10 11:48:36.409381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.502 [2024-06-10 11:48:36.418399] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.502 [2024-06-10 11:48:36.418962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.502 [2024-06-10 11:48:36.418986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.502 [2024-06-10 11:48:36.419000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.502 [2024-06-10 11:48:36.419237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.502 [2024-06-10 11:48:36.419473] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.502 [2024-06-10 11:48:36.419487] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.502 [2024-06-10 11:48:36.419504] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.502 [2024-06-10 11:48:36.423237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.502 [2024-06-10 11:48:36.432473] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.502 [2024-06-10 11:48:36.433052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.502 [2024-06-10 11:48:36.433093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.502 [2024-06-10 11:48:36.433106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.502 [2024-06-10 11:48:36.433342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.502 [2024-06-10 11:48:36.433587] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.502 [2024-06-10 11:48:36.433602] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.502 [2024-06-10 11:48:36.433614] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.502 [2024-06-10 11:48:36.437345] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.502 [2024-06-10 11:48:36.446578] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.502 [2024-06-10 11:48:36.447127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.502 [2024-06-10 11:48:36.447149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.502 [2024-06-10 11:48:36.447162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.502 [2024-06-10 11:48:36.447397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.502 [2024-06-10 11:48:36.447641] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.502 [2024-06-10 11:48:36.447656] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.502 [2024-06-10 11:48:36.447669] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.502 [2024-06-10 11:48:36.451390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.502 [2024-06-10 11:48:36.460623] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.502 [2024-06-10 11:48:36.461194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.502 [2024-06-10 11:48:36.461244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.502 [2024-06-10 11:48:36.461276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.502 [2024-06-10 11:48:36.461880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.502 [2024-06-10 11:48:36.462459] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.502 [2024-06-10 11:48:36.462473] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.462485] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.466216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.503 [2024-06-10 11:48:36.474794] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.475380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.475402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.475416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.475659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.475896] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.475910] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.475922] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.479644] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.503 [2024-06-10 11:48:36.488883] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.489442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.489495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.489528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.490131] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.490439] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.490462] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.490482] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.496713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.503 [2024-06-10 11:48:36.503825] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.504409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.504459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.504490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.504923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.505181] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.505196] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.505210] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.509261] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.503 [2024-06-10 11:48:36.517934] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.518491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.518513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.518526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.518768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.519012] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.519026] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.519038] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.522765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.503 [2024-06-10 11:48:36.532003] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.532573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.532637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.532669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.533256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.533540] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.533554] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.533566] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.537298] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.503 [2024-06-10 11:48:36.546097] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.546655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.546705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.546737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.547324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.547585] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.547600] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.547613] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.551332] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.503 [2024-06-10 11:48:36.560135] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.560691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.560714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.560727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.560963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.561199] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.561213] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.561225] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.564960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.503 [2024-06-10 11:48:36.574205] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.574782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.574832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.574864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.575450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.575923] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.575938] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.575951] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.579672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.503 [2024-06-10 11:48:36.588237] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.588801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.588853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.588885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.589475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.589967] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.589982] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.503 [2024-06-10 11:48:36.589995] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.503 [2024-06-10 11:48:36.593716] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.503 [2024-06-10 11:48:36.602295] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.503 [2024-06-10 11:48:36.602852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.503 [2024-06-10 11:48:36.602875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.503 [2024-06-10 11:48:36.602888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.503 [2024-06-10 11:48:36.603123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.503 [2024-06-10 11:48:36.603358] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.503 [2024-06-10 11:48:36.603372] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.504 [2024-06-10 11:48:36.603385] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.764 [2024-06-10 11:48:36.607115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.764 [2024-06-10 11:48:36.616358] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.764 [2024-06-10 11:48:36.616927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.764 [2024-06-10 11:48:36.616987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.764 [2024-06-10 11:48:36.617020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.764 [2024-06-10 11:48:36.617543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.764 [2024-06-10 11:48:36.617787] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.764 [2024-06-10 11:48:36.617802] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.764 [2024-06-10 11:48:36.617815] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.764 [2024-06-10 11:48:36.621543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.764 [2024-06-10 11:48:36.630559] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.764 [2024-06-10 11:48:36.631124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.764 [2024-06-10 11:48:36.631174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.764 [2024-06-10 11:48:36.631206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.764 [2024-06-10 11:48:36.631660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.764 [2024-06-10 11:48:36.631948] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.764 [2024-06-10 11:48:36.631971] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.764 [2024-06-10 11:48:36.631992] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.764 [2024-06-10 11:48:36.638220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.764 [2024-06-10 11:48:36.645547] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.764 [2024-06-10 11:48:36.646144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.764 [2024-06-10 11:48:36.646169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.764 [2024-06-10 11:48:36.646183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.764 [2024-06-10 11:48:36.646440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.764 [2024-06-10 11:48:36.646706] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.764 [2024-06-10 11:48:36.646722] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.764 [2024-06-10 11:48:36.646736] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.764 [2024-06-10 11:48:36.650790] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.764 [2024-06-10 11:48:36.659729] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.764 [2024-06-10 11:48:36.660305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.764 [2024-06-10 11:48:36.660355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.764 [2024-06-10 11:48:36.660387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.764 [2024-06-10 11:48:36.660809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.764 [2024-06-10 11:48:36.661051] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.764 [2024-06-10 11:48:36.661065] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.764 [2024-06-10 11:48:36.661077] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.764 [2024-06-10 11:48:36.664808] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.764 [2024-06-10 11:48:36.673846] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.764 [2024-06-10 11:48:36.674420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.764 [2024-06-10 11:48:36.674469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.764 [2024-06-10 11:48:36.674501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.764 [2024-06-10 11:48:36.674996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.764 [2024-06-10 11:48:36.675233] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.764 [2024-06-10 11:48:36.675247] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.764 [2024-06-10 11:48:36.675259] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.764 [2024-06-10 11:48:36.679104] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.764 [2024-06-10 11:48:36.687924] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.764 [2024-06-10 11:48:36.688500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.764 [2024-06-10 11:48:36.688552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.764 [2024-06-10 11:48:36.688598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.764 [2024-06-10 11:48:36.688998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.764 [2024-06-10 11:48:36.689235] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.764 [2024-06-10 11:48:36.689249] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.764 [2024-06-10 11:48:36.689261] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.764 [2024-06-10 11:48:36.692997] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.764 [2024-06-10 11:48:36.702017] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.764 [2024-06-10 11:48:36.702600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.764 [2024-06-10 11:48:36.702623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.764 [2024-06-10 11:48:36.702637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.764 [2024-06-10 11:48:36.702872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.764 [2024-06-10 11:48:36.703108] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.764 [2024-06-10 11:48:36.703123] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.764 [2024-06-10 11:48:36.703135] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.764 [2024-06-10 11:48:36.706877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.764 [2024-06-10 11:48:36.716124] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.764 [2024-06-10 11:48:36.716687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.764 [2024-06-10 11:48:36.716710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.764 [2024-06-10 11:48:36.716723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.764 [2024-06-10 11:48:36.716958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.764 [2024-06-10 11:48:36.717196] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.764 [2024-06-10 11:48:36.717210] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.764 [2024-06-10 11:48:36.717222] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.764 [2024-06-10 11:48:36.720956] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.765 [2024-06-10 11:48:36.730202] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.730759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.730782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.730795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.731031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.731267] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.731281] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.731293] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.735031] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.765 [2024-06-10 11:48:36.744278] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.744842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.744893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.744924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.745511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.746057] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.746072] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.746084] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.749810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.765 [2024-06-10 11:48:36.758388] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.758977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.759031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.759072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.759683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.760208] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.760222] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.760234] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.763965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.765 [2024-06-10 11:48:36.772548] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.773119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.773142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.773155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.773391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.773634] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.773649] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.773661] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.777385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.765 [2024-06-10 11:48:36.786639] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.787216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.787267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.787298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.787901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.788312] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.788326] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.788338] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.792068] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.765 [2024-06-10 11:48:36.800636] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.801193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.801215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.801228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.801464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.801708] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.801727] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.801739] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.805461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:11.765 [2024-06-10 11:48:36.814712] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.815286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.815335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.815367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.815969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.816415] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.816429] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.816441] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.820167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.765 [2024-06-10 11:48:36.828756] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.829321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.829371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.829403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.830004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.830423] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.830437] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.830449] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.834178] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
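The follow-on "(9): Bad file descriptor" flush errors are EBADF: once the connect attempt fails, the qpair's socket is torn down, so the subsequent flush in nvme_tcp_qpair_process_completions is operating on a descriptor that is no longer valid. A minimal standalone sketch (again not SPDK code) showing the same errno on an already-closed socket:

/* Minimal sketch (not SPDK code): sending on a socket descriptor that has
 * already been closed fails with errno 9 (EBADF, "Bad file descriptor"),
 * which is what the "Failed to flush tqpair ... (9)" messages report. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    close(fd);                        /* socket torn down, fd is now invalid */

    char byte = 0;
    if (send(fd, &byte, sizeof(byte), 0) < 0) {
        /* Prints errno 9 (Bad file descriptor). */
        printf("send() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}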
00:40:11.765 [2024-06-10 11:48:36.842738] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.843294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.843316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.843329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.843566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.843810] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.843824] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.843836] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.847558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:11.765 [2024-06-10 11:48:36.856785] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:11.765 [2024-06-10 11:48:36.857369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:11.765 [2024-06-10 11:48:36.857419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:11.765 [2024-06-10 11:48:36.857450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:11.765 [2024-06-10 11:48:36.858053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:11.765 [2024-06-10 11:48:36.858493] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:11.765 [2024-06-10 11:48:36.858507] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:11.765 [2024-06-10 11:48:36.858519] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:11.765 [2024-06-10 11:48:36.862248] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.026 [2024-06-10 11:48:36.870831] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.026 [2024-06-10 11:48:36.871395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.026 [2024-06-10 11:48:36.871417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.026 [2024-06-10 11:48:36.871430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.026 [2024-06-10 11:48:36.871674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.026 [2024-06-10 11:48:36.871911] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.026 [2024-06-10 11:48:36.871925] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.026 [2024-06-10 11:48:36.871938] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.026 [2024-06-10 11:48:36.875671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.026 [2024-06-10 11:48:36.884915] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.026 [2024-06-10 11:48:36.885470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.026 [2024-06-10 11:48:36.885492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.026 [2024-06-10 11:48:36.885505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.026 [2024-06-10 11:48:36.885749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.026 [2024-06-10 11:48:36.885987] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.026 [2024-06-10 11:48:36.886001] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.026 [2024-06-10 11:48:36.886013] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.026 [2024-06-10 11:48:36.889744] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.026 [2024-06-10 11:48:36.898981] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.026 [2024-06-10 11:48:36.899543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.026 [2024-06-10 11:48:36.899611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.026 [2024-06-10 11:48:36.899644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.026 [2024-06-10 11:48:36.900219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.026 [2024-06-10 11:48:36.900456] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.026 [2024-06-10 11:48:36.900469] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.026 [2024-06-10 11:48:36.900482] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.026 [2024-06-10 11:48:36.904207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.026 [2024-06-10 11:48:36.912998] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.026 [2024-06-10 11:48:36.913564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.026 [2024-06-10 11:48:36.913592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.026 [2024-06-10 11:48:36.913605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.026 [2024-06-10 11:48:36.913842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.026 [2024-06-10 11:48:36.914079] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.026 [2024-06-10 11:48:36.914093] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.026 [2024-06-10 11:48:36.914105] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.026 [2024-06-10 11:48:36.917834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.026 [2024-06-10 11:48:36.927068] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.026 [2024-06-10 11:48:36.927642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.026 [2024-06-10 11:48:36.927694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.026 [2024-06-10 11:48:36.927727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.026 [2024-06-10 11:48:36.928312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.026 [2024-06-10 11:48:36.928571] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.026 [2024-06-10 11:48:36.928590] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.026 [2024-06-10 11:48:36.928603] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.026 [2024-06-10 11:48:36.932335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.026 [2024-06-10 11:48:36.941148] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.026 [2024-06-10 11:48:36.941722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.026 [2024-06-10 11:48:36.941772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.026 [2024-06-10 11:48:36.941804] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.026 [2024-06-10 11:48:36.942351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.026 [2024-06-10 11:48:36.942595] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.026 [2024-06-10 11:48:36.942610] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.026 [2024-06-10 11:48:36.942626] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.026 [2024-06-10 11:48:36.946350] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.026 [2024-06-10 11:48:36.955154] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.026 [2024-06-10 11:48:36.955741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.026 [2024-06-10 11:48:36.955806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.026 [2024-06-10 11:48:36.955840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.026 [2024-06-10 11:48:36.956426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.026 [2024-06-10 11:48:36.956815] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.026 [2024-06-10 11:48:36.956830] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.026 [2024-06-10 11:48:36.956842] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:36.960570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.027 [2024-06-10 11:48:36.969143] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:36.969739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.027 [2024-06-10 11:48:36.969791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.027 [2024-06-10 11:48:36.969824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.027 [2024-06-10 11:48:36.970411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.027 [2024-06-10 11:48:36.970888] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.027 [2024-06-10 11:48:36.970903] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.027 [2024-06-10 11:48:36.970916] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:36.974647] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.027 [2024-06-10 11:48:36.983235] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:36.983827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.027 [2024-06-10 11:48:36.983878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.027 [2024-06-10 11:48:36.983911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.027 [2024-06-10 11:48:36.984445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.027 [2024-06-10 11:48:36.984690] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.027 [2024-06-10 11:48:36.984704] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.027 [2024-06-10 11:48:36.984717] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:36.988450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.027 [2024-06-10 11:48:36.997258] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:36.997848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.027 [2024-06-10 11:48:36.997906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.027 [2024-06-10 11:48:36.997938] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.027 [2024-06-10 11:48:36.998518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.027 [2024-06-10 11:48:36.998764] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.027 [2024-06-10 11:48:36.998778] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.027 [2024-06-10 11:48:36.998791] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:37.002522] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.027 [2024-06-10 11:48:37.011274] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:37.011895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.027 [2024-06-10 11:48:37.011947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.027 [2024-06-10 11:48:37.011979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.027 [2024-06-10 11:48:37.012484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.027 [2024-06-10 11:48:37.012728] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.027 [2024-06-10 11:48:37.012743] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.027 [2024-06-10 11:48:37.012755] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:37.016485] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.027 [2024-06-10 11:48:37.025289] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:37.025821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.027 [2024-06-10 11:48:37.025844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.027 [2024-06-10 11:48:37.025857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.027 [2024-06-10 11:48:37.026092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.027 [2024-06-10 11:48:37.026331] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.027 [2024-06-10 11:48:37.026345] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.027 [2024-06-10 11:48:37.026357] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:37.030086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.027 [2024-06-10 11:48:37.039359] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:37.039867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.027 [2024-06-10 11:48:37.039891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.027 [2024-06-10 11:48:37.039904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.027 [2024-06-10 11:48:37.040140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.027 [2024-06-10 11:48:37.040381] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.027 [2024-06-10 11:48:37.040396] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.027 [2024-06-10 11:48:37.040408] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:37.044145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.027 [2024-06-10 11:48:37.053394] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:37.053922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.027 [2024-06-10 11:48:37.053973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.027 [2024-06-10 11:48:37.054005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.027 [2024-06-10 11:48:37.054608] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.027 [2024-06-10 11:48:37.055057] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.027 [2024-06-10 11:48:37.055072] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.027 [2024-06-10 11:48:37.055084] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:37.058809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.027 [2024-06-10 11:48:37.067385] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:37.067870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.027 [2024-06-10 11:48:37.067893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.027 [2024-06-10 11:48:37.067906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.027 [2024-06-10 11:48:37.068141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.027 [2024-06-10 11:48:37.068377] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.027 [2024-06-10 11:48:37.068391] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.027 [2024-06-10 11:48:37.068404] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:37.072147] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.027 [2024-06-10 11:48:37.081392] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:37.081944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.027 [2024-06-10 11:48:37.081996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.027 [2024-06-10 11:48:37.082028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.027 [2024-06-10 11:48:37.082629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.027 [2024-06-10 11:48:37.083091] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.027 [2024-06-10 11:48:37.083106] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.027 [2024-06-10 11:48:37.083118] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.027 [2024-06-10 11:48:37.086860] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.027 [2024-06-10 11:48:37.095435] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.027 [2024-06-10 11:48:37.095952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.028 [2024-06-10 11:48:37.095975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.028 [2024-06-10 11:48:37.095988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.028 [2024-06-10 11:48:37.096224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.028 [2024-06-10 11:48:37.096462] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.028 [2024-06-10 11:48:37.096477] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.028 [2024-06-10 11:48:37.096490] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.028 [2024-06-10 11:48:37.100224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.028 [2024-06-10 11:48:37.109468] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.028 [2024-06-10 11:48:37.109957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.028 [2024-06-10 11:48:37.109980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.028 [2024-06-10 11:48:37.109993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.028 [2024-06-10 11:48:37.110228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.028 [2024-06-10 11:48:37.110464] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.028 [2024-06-10 11:48:37.110478] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.028 [2024-06-10 11:48:37.110491] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.028 [2024-06-10 11:48:37.114219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.028 [2024-06-10 11:48:37.123456] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.028 [2024-06-10 11:48:37.124019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.028 [2024-06-10 11:48:37.124041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.028 [2024-06-10 11:48:37.124055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.028 [2024-06-10 11:48:37.124291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.028 [2024-06-10 11:48:37.124527] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.028 [2024-06-10 11:48:37.124542] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.028 [2024-06-10 11:48:37.124554] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.028 [2024-06-10 11:48:37.128290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.288 [2024-06-10 11:48:37.137541] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.288 [2024-06-10 11:48:37.137983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.288 [2024-06-10 11:48:37.138005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.288 [2024-06-10 11:48:37.138021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.288 [2024-06-10 11:48:37.138257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.288 [2024-06-10 11:48:37.138494] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.288 [2024-06-10 11:48:37.138509] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.288 [2024-06-10 11:48:37.138522] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.288 [2024-06-10 11:48:37.142255] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.288 [2024-06-10 11:48:37.151742] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.288 [2024-06-10 11:48:37.152362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.288 [2024-06-10 11:48:37.152412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.288 [2024-06-10 11:48:37.152444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.288 [2024-06-10 11:48:37.152978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.288 [2024-06-10 11:48:37.153216] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.288 [2024-06-10 11:48:37.153230] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.288 [2024-06-10 11:48:37.153242] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.288 [2024-06-10 11:48:37.156979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.288 [2024-06-10 11:48:37.165799] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.288 [2024-06-10 11:48:37.166374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.288 [2024-06-10 11:48:37.166397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.288 [2024-06-10 11:48:37.166410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.288 [2024-06-10 11:48:37.166654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.288 [2024-06-10 11:48:37.166891] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.288 [2024-06-10 11:48:37.166905] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.288 [2024-06-10 11:48:37.166917] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.288 [2024-06-10 11:48:37.170657] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.288 [2024-06-10 11:48:37.179931] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.288 [2024-06-10 11:48:37.180497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.288 [2024-06-10 11:48:37.180520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.288 [2024-06-10 11:48:37.180532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.288 [2024-06-10 11:48:37.180777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.288 [2024-06-10 11:48:37.181015] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.288 [2024-06-10 11:48:37.181033] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.288 [2024-06-10 11:48:37.181045] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.288 [2024-06-10 11:48:37.184777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.288 [2024-06-10 11:48:37.194038] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.288 [2024-06-10 11:48:37.194595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.288 [2024-06-10 11:48:37.194617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.288 [2024-06-10 11:48:37.194630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.288 [2024-06-10 11:48:37.194866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.288 [2024-06-10 11:48:37.195104] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.288 [2024-06-10 11:48:37.195119] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.288 [2024-06-10 11:48:37.195131] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.288 [2024-06-10 11:48:37.198868] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.288 [2024-06-10 11:48:37.208112] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.288 [2024-06-10 11:48:37.208701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.288 [2024-06-10 11:48:37.208752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.288 [2024-06-10 11:48:37.208784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.288 [2024-06-10 11:48:37.209277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.288 [2024-06-10 11:48:37.209514] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.288 [2024-06-10 11:48:37.209528] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.288 [2024-06-10 11:48:37.209540] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.288 [2024-06-10 11:48:37.213275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.288 [2024-06-10 11:48:37.222299] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.288 [2024-06-10 11:48:37.222861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.288 [2024-06-10 11:48:37.222883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.288 [2024-06-10 11:48:37.222896] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.223132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.223369] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.223383] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.223395] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.227127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.289 [2024-06-10 11:48:37.236393] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.236977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.237030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.237062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.237443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.237689] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.237704] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.237717] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.241450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.289 [2024-06-10 11:48:37.250482] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.250920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.250943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.250956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.251193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.251430] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.251444] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.251457] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.255194] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.289 [2024-06-10 11:48:37.264665] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.265171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.265193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.265206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.265442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.265688] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.265702] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.265715] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.269447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.289 [2024-06-10 11:48:37.278704] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.279150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.279172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.279189] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.279424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.279667] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.279682] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.279694] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.283420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.289 [2024-06-10 11:48:37.292915] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.293500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.293551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.293596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.294183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.294659] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.294674] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.294687] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.298416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.289 [2024-06-10 11:48:37.306999] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.307586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.307609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.307622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.307858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.308096] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.308110] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.308122] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.311857] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.289 [2024-06-10 11:48:37.321108] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.321706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.321758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.321791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.322378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.322756] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.322775] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.322787] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.326523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.289 [2024-06-10 11:48:37.335114] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.335690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.335713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.335727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.335964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.336202] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.336216] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.336229] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.339959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.289 [2024-06-10 11:48:37.349211] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.349719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.349742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.349755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.289 [2024-06-10 11:48:37.349993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.289 [2024-06-10 11:48:37.350230] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.289 [2024-06-10 11:48:37.350244] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.289 [2024-06-10 11:48:37.350257] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.289 [2024-06-10 11:48:37.353991] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.289 [2024-06-10 11:48:37.363262] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.289 [2024-06-10 11:48:37.363837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.289 [2024-06-10 11:48:37.363889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.289 [2024-06-10 11:48:37.363920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.290 [2024-06-10 11:48:37.364429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.290 [2024-06-10 11:48:37.364672] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.290 [2024-06-10 11:48:37.364687] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.290 [2024-06-10 11:48:37.364699] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.290 [2024-06-10 11:48:37.368429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.290 [2024-06-10 11:48:37.377480] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.290 [2024-06-10 11:48:37.378008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.290 [2024-06-10 11:48:37.378031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.290 [2024-06-10 11:48:37.378043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.290 [2024-06-10 11:48:37.378278] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.290 [2024-06-10 11:48:37.378515] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.290 [2024-06-10 11:48:37.378530] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.290 [2024-06-10 11:48:37.378542] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.290 [2024-06-10 11:48:37.382282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.550 [2024-06-10 11:48:37.391538] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.550 [2024-06-10 11:48:37.392057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.550 [2024-06-10 11:48:37.392080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.550 [2024-06-10 11:48:37.392093] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.550 [2024-06-10 11:48:37.392329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.550 [2024-06-10 11:48:37.392565] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.550 [2024-06-10 11:48:37.392585] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.550 [2024-06-10 11:48:37.392598] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.550 [2024-06-10 11:48:37.396330] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.550 [2024-06-10 11:48:37.405824] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.406393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.406417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.406430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.406674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.406913] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.406927] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.406940] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.551 [2024-06-10 11:48:37.410683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.551 [2024-06-10 11:48:37.419925] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.420485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.420508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.420521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.420768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.421006] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.421020] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.421032] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.551 [2024-06-10 11:48:37.424760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.551 [2024-06-10 11:48:37.434014] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.434606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.434658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.434690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.435104] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.435341] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.435356] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.435368] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.551 [2024-06-10 11:48:37.439103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.551 [2024-06-10 11:48:37.448125] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.448714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.448765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.448797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.449386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.449707] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.449722] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.449734] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.551 [2024-06-10 11:48:37.453463] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.551 [2024-06-10 11:48:37.462275] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.462868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.462919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.462951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.463539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.463804] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.463819] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.463835] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.551 [2024-06-10 11:48:37.467566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.551 [2024-06-10 11:48:37.476381] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.476979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.477030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.477062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.477662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.478145] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.478159] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.478172] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.551 [2024-06-10 11:48:37.481907] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.551 [2024-06-10 11:48:37.490491] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.490950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.491002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.491033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.491633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.492041] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.492056] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.492068] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.551 [2024-06-10 11:48:37.498086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.551 [2024-06-10 11:48:37.505492] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.506010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.506034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.506048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.506304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.506561] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.506582] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.506596] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.551 [2024-06-10 11:48:37.510653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.551 [2024-06-10 11:48:37.519592] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.520161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.520218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.520251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.520791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.521028] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.521043] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.521055] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.551 [2024-06-10 11:48:37.524793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.551 [2024-06-10 11:48:37.533619] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.551 [2024-06-10 11:48:37.534183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.551 [2024-06-10 11:48:37.534233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.551 [2024-06-10 11:48:37.534265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.551 [2024-06-10 11:48:37.534789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.551 [2024-06-10 11:48:37.535027] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.551 [2024-06-10 11:48:37.535041] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.551 [2024-06-10 11:48:37.535053] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.552 [2024-06-10 11:48:37.538790] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.552 [2024-06-10 11:48:37.547803] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.552 [2024-06-10 11:48:37.548309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.552 [2024-06-10 11:48:37.548332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.552 [2024-06-10 11:48:37.548345] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.552 [2024-06-10 11:48:37.548589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.552 [2024-06-10 11:48:37.548827] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.552 [2024-06-10 11:48:37.548841] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.552 [2024-06-10 11:48:37.548853] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.552 [2024-06-10 11:48:37.552586] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.552 [2024-06-10 11:48:37.561833] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.552 [2024-06-10 11:48:37.562414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.552 [2024-06-10 11:48:37.562436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.552 [2024-06-10 11:48:37.562449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.552 [2024-06-10 11:48:37.562692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.552 [2024-06-10 11:48:37.562932] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.552 [2024-06-10 11:48:37.562947] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.552 [2024-06-10 11:48:37.562959] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.552 [2024-06-10 11:48:37.566692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.552 [2024-06-10 11:48:37.575947] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.552 [2024-06-10 11:48:37.576525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.552 [2024-06-10 11:48:37.576547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.552 [2024-06-10 11:48:37.576561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.552 [2024-06-10 11:48:37.576803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.552 [2024-06-10 11:48:37.577040] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.552 [2024-06-10 11:48:37.577054] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.552 [2024-06-10 11:48:37.577067] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.552 [2024-06-10 11:48:37.580800] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.552 [2024-06-10 11:48:37.590048] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.552 [2024-06-10 11:48:37.590559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.552 [2024-06-10 11:48:37.590587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.552 [2024-06-10 11:48:37.590601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.552 [2024-06-10 11:48:37.590836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.552 [2024-06-10 11:48:37.591073] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.552 [2024-06-10 11:48:37.591087] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.552 [2024-06-10 11:48:37.591100] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.552 [2024-06-10 11:48:37.594832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.552 [2024-06-10 11:48:37.604074] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.552 [2024-06-10 11:48:37.604653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.552 [2024-06-10 11:48:37.604676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.552 [2024-06-10 11:48:37.604690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.552 [2024-06-10 11:48:37.604926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.552 [2024-06-10 11:48:37.605163] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.552 [2024-06-10 11:48:37.605177] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.552 [2024-06-10 11:48:37.605189] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.552 [2024-06-10 11:48:37.608930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.552 [2024-06-10 11:48:37.618177] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.552 [2024-06-10 11:48:37.618764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.552 [2024-06-10 11:48:37.618817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.552 [2024-06-10 11:48:37.618850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.552 [2024-06-10 11:48:37.619246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.552 [2024-06-10 11:48:37.619483] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.552 [2024-06-10 11:48:37.619498] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.552 [2024-06-10 11:48:37.619510] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4168942 Killed "${NVMF_APP[@]}" "$@" 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:40:12.552 [2024-06-10 11:48:37.623250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:12.552 [2024-06-10 11:48:37.632285] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=4170538 00:40:12.552 [2024-06-10 11:48:37.632781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 4170538 00:40:12.552 [2024-06-10 11:48:37.632803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.552 [2024-06-10 11:48:37.632816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:12.552 [2024-06-10 11:48:37.633052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 4170538 ']' 00:40:12.552 [2024-06-10 11:48:37.633289] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.552 [2024-06-10 11:48:37.633304] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.552 [2024-06-10 11:48:37.633316] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:12.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:12.552 11:48:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:12.552 [2024-06-10 11:48:37.637055] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.552 [2024-06-10 11:48:37.646311] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.552 [2024-06-10 11:48:37.646899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.552 [2024-06-10 11:48:37.646922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.552 [2024-06-10 11:48:37.646935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.552 [2024-06-10 11:48:37.647171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.552 [2024-06-10 11:48:37.647407] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.552 [2024-06-10 11:48:37.647421] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.552 [2024-06-10 11:48:37.647433] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.552 [2024-06-10 11:48:37.651162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.813 [2024-06-10 11:48:37.660406] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.813 [2024-06-10 11:48:37.660969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.813 [2024-06-10 11:48:37.660992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.813 [2024-06-10 11:48:37.661005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.813 [2024-06-10 11:48:37.661241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.813 [2024-06-10 11:48:37.661478] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.813 [2024-06-10 11:48:37.661492] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.813 [2024-06-10 11:48:37.661504] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.813 [2024-06-10 11:48:37.665236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.813 [2024-06-10 11:48:37.674488] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.813 [2024-06-10 11:48:37.675070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.813 [2024-06-10 11:48:37.675093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.813 [2024-06-10 11:48:37.675106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.813 [2024-06-10 11:48:37.675342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.813 [2024-06-10 11:48:37.675585] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.813 [2024-06-10 11:48:37.675599] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.813 [2024-06-10 11:48:37.675612] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.813 [2024-06-10 11:48:37.679340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.813 [2024-06-10 11:48:37.684445] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:40:12.813 [2024-06-10 11:48:37.684499] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:12.813 [2024-06-10 11:48:37.688574] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.813 [2024-06-10 11:48:37.689164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.813 [2024-06-10 11:48:37.689186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.813 [2024-06-10 11:48:37.689199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.813 [2024-06-10 11:48:37.689434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.813 [2024-06-10 11:48:37.689677] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.813 [2024-06-10 11:48:37.689692] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.813 [2024-06-10 11:48:37.689705] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.813 [2024-06-10 11:48:37.693426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.813 [2024-06-10 11:48:37.702679] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.813 [2024-06-10 11:48:37.703190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.813 [2024-06-10 11:48:37.703211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.813 [2024-06-10 11:48:37.703224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.813 [2024-06-10 11:48:37.703460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.813 [2024-06-10 11:48:37.703703] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.813 [2024-06-10 11:48:37.703718] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.813 [2024-06-10 11:48:37.703731] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.813 [2024-06-10 11:48:37.707586] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.814 [2024-06-10 11:48:37.716828] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.717408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.717431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.717444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.717686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.717924] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.717938] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.717950] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.814 [2024-06-10 11:48:37.721675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.814 [2024-06-10 11:48:37.730902] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.731478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.731500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.731513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.731760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.731997] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.732012] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.732024] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.814 [2024-06-10 11:48:37.735748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.814 EAL: No free 2048 kB hugepages reported on node 1 00:40:12.814 [2024-06-10 11:48:37.744989] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.745571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.745599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.745612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.745848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.746085] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.746099] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.746112] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.814 [2024-06-10 11:48:37.749853] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.814 [2024-06-10 11:48:37.759084] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.759666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.759688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.759701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.759938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.760174] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.760188] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.760201] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.814 [2024-06-10 11:48:37.763930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.814 [2024-06-10 11:48:37.773170] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.773746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.773769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.773782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.774018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.774254] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.774268] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.774284] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.814 [2024-06-10 11:48:37.778025] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.814 [2024-06-10 11:48:37.787256] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.787832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.787855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.787869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.788105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.788342] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.788357] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.788369] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.814 [2024-06-10 11:48:37.792099] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.814 [2024-06-10 11:48:37.801347] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.801942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.801965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.801978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.802214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.802450] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.802464] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.802477] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.814 [2024-06-10 11:48:37.803311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:12.814 [2024-06-10 11:48:37.806209] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.814 [2024-06-10 11:48:37.815448] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.816046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.816071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.816085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.816320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.816558] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.816572] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.816591] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.814 [2024-06-10 11:48:37.820321] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.814 [2024-06-10 11:48:37.829561] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.830150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.830172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.830186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.830422] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.830665] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.830679] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.830692] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.814 [2024-06-10 11:48:37.834420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.814 [2024-06-10 11:48:37.843667] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.814 [2024-06-10 11:48:37.844111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.814 [2024-06-10 11:48:37.844134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.814 [2024-06-10 11:48:37.844147] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.814 [2024-06-10 11:48:37.844382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.814 [2024-06-10 11:48:37.844625] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.814 [2024-06-10 11:48:37.844640] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.814 [2024-06-10 11:48:37.844652] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.815 [2024-06-10 11:48:37.848382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.815 [2024-06-10 11:48:37.857850] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.815 [2024-06-10 11:48:37.858449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.815 [2024-06-10 11:48:37.858474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.815 [2024-06-10 11:48:37.858488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.815 [2024-06-10 11:48:37.858731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.815 [2024-06-10 11:48:37.858968] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.815 [2024-06-10 11:48:37.858982] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.815 [2024-06-10 11:48:37.858995] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.815 [2024-06-10 11:48:37.862727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:12.815 [2024-06-10 11:48:37.871960] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.815 [2024-06-10 11:48:37.872541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.815 [2024-06-10 11:48:37.872563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.815 [2024-06-10 11:48:37.872581] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.815 [2024-06-10 11:48:37.872822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.815 [2024-06-10 11:48:37.873061] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.815 [2024-06-10 11:48:37.873075] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.815 [2024-06-10 11:48:37.873088] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.815 [2024-06-10 11:48:37.876830] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.815 [2024-06-10 11:48:37.886059] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.815 [2024-06-10 11:48:37.886644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.815 [2024-06-10 11:48:37.886667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.815 [2024-06-10 11:48:37.886680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.815 [2024-06-10 11:48:37.886916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.815 [2024-06-10 11:48:37.887152] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.815 [2024-06-10 11:48:37.887166] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.815 [2024-06-10 11:48:37.887179] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.815 [2024-06-10 11:48:37.889649] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:12.815 [2024-06-10 11:48:37.889681] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:12.815 [2024-06-10 11:48:37.889694] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:12.815 [2024-06-10 11:48:37.889706] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:12.815 [2024-06-10 11:48:37.889715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:12.815 [2024-06-10 11:48:37.889763] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:40:12.815 [2024-06-10 11:48:37.889894] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:40:12.815 [2024-06-10 11:48:37.889894] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:12.815 [2024-06-10 11:48:37.890912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.815 [2024-06-10 11:48:37.900154] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.815 [2024-06-10 11:48:37.900752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.815 [2024-06-10 11:48:37.900778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.815 [2024-06-10 11:48:37.900792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.815 [2024-06-10 11:48:37.901030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.815 [2024-06-10 11:48:37.901266] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.815 [2024-06-10 11:48:37.901280] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.815 [2024-06-10 11:48:37.901293] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:12.815 [2024-06-10 11:48:37.905026] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:12.815 [2024-06-10 11:48:37.914276] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:12.815 [2024-06-10 11:48:37.914865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:12.815 [2024-06-10 11:48:37.914891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:12.815 [2024-06-10 11:48:37.914905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:12.815 [2024-06-10 11:48:37.915141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:12.815 [2024-06-10 11:48:37.915379] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:12.815 [2024-06-10 11:48:37.915393] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:12.815 [2024-06-10 11:48:37.915405] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.075 [2024-06-10 11:48:37.919136] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.075 [2024-06-10 11:48:37.928374] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.075 [2024-06-10 11:48:37.928976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.075 [2024-06-10 11:48:37.929001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.075 [2024-06-10 11:48:37.929015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.075 [2024-06-10 11:48:37.929252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.075 [2024-06-10 11:48:37.929488] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.075 [2024-06-10 11:48:37.929503] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.075 [2024-06-10 11:48:37.929515] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.075 [2024-06-10 11:48:37.933266] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.075 [2024-06-10 11:48:37.942518] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.075 [2024-06-10 11:48:37.943100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.075 [2024-06-10 11:48:37.943125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.075 [2024-06-10 11:48:37.943139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.075 [2024-06-10 11:48:37.943377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.075 [2024-06-10 11:48:37.943621] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.075 [2024-06-10 11:48:37.943636] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.075 [2024-06-10 11:48:37.943649] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.075 [2024-06-10 11:48:37.947372] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.075 [2024-06-10 11:48:37.956608] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.075 [2024-06-10 11:48:37.957194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.075 [2024-06-10 11:48:37.957217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.075 [2024-06-10 11:48:37.957230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.075 [2024-06-10 11:48:37.957473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.075 [2024-06-10 11:48:37.957719] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.075 [2024-06-10 11:48:37.957734] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.075 [2024-06-10 11:48:37.957747] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.075 [2024-06-10 11:48:37.961475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.075 [2024-06-10 11:48:37.970801] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.075 [2024-06-10 11:48:37.971370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.075 [2024-06-10 11:48:37.971393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.075 [2024-06-10 11:48:37.971407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.075 [2024-06-10 11:48:37.971650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.075 [2024-06-10 11:48:37.971888] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.075 [2024-06-10 11:48:37.971902] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.075 [2024-06-10 11:48:37.971915] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.075 [2024-06-10 11:48:37.975652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.076 [2024-06-10 11:48:37.984887] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:37.985442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:37.985465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:37.985478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:37.985721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:37.985959] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:37.985973] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:37.985986] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:37.989707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.076 [2024-06-10 11:48:37.998940] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:37.999385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:37.999407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:37.999420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:37.999662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:37.999899] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:37.999914] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:37.999929] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.003654] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.076 [2024-06-10 11:48:38.013120] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.013561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.013588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.013601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.013838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.014075] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.014089] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.014101] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.017825] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.076 [2024-06-10 11:48:38.027278] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.027821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.027844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.027857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.028094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.028331] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.028345] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.028357] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.032088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.076 [2024-06-10 11:48:38.041312] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.041894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.041917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.041930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.042165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.042403] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.042417] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.042429] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.046155] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.076 [2024-06-10 11:48:38.055383] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.055962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.055985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.055998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.056234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.056471] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.056486] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.056498] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.060226] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.076 [2024-06-10 11:48:38.069457] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.069969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.069992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.070005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.070241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.070478] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.070492] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.070504] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.074232] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.076 [2024-06-10 11:48:38.083466] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.083972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.083995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.084008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.084245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.084482] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.084496] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.084508] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.088239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.076 [2024-06-10 11:48:38.097478] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.098064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.098086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.098100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.098336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.098583] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.098598] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.098610] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.102327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.076 [2024-06-10 11:48:38.111560] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.112104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.112126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.112139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.112375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.112618] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.112633] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.112645] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.116364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.076 [2024-06-10 11:48:38.125593] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.126159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.126181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.126194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.126430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.126673] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.126687] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.126700] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.130420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.076 [2024-06-10 11:48:38.139649] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.140145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.140167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.140180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.140416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.140661] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.140675] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.140688] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.144416] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.076 [2024-06-10 11:48:38.153656] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.154237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.154259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.154272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.154508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.154750] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.154764] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.154776] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.158500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.076 [2024-06-10 11:48:38.167728] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.076 [2024-06-10 11:48:38.168308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.076 [2024-06-10 11:48:38.168330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.076 [2024-06-10 11:48:38.168343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.076 [2024-06-10 11:48:38.168585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.076 [2024-06-10 11:48:38.168822] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.076 [2024-06-10 11:48:38.168836] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.076 [2024-06-10 11:48:38.168849] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.076 [2024-06-10 11:48:38.172580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.336 [2024-06-10 11:48:38.181820] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.336 [2024-06-10 11:48:38.182371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.336 [2024-06-10 11:48:38.182393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.336 [2024-06-10 11:48:38.182407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.336 [2024-06-10 11:48:38.182649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.336 [2024-06-10 11:48:38.182888] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.336 [2024-06-10 11:48:38.182902] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.336 [2024-06-10 11:48:38.182915] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.336 [2024-06-10 11:48:38.186637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.336 [2024-06-10 11:48:38.195861] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.336 [2024-06-10 11:48:38.196434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.336 [2024-06-10 11:48:38.196459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.336 [2024-06-10 11:48:38.196473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.336 [2024-06-10 11:48:38.196714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.336 [2024-06-10 11:48:38.196952] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.336 [2024-06-10 11:48:38.196966] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.336 [2024-06-10 11:48:38.196979] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.336 [2024-06-10 11:48:38.200707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.336 [2024-06-10 11:48:38.209937] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.336 [2024-06-10 11:48:38.210513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.336 [2024-06-10 11:48:38.210535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.336 [2024-06-10 11:48:38.210549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.336 [2024-06-10 11:48:38.210790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.336 [2024-06-10 11:48:38.211028] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.336 [2024-06-10 11:48:38.211042] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.336 [2024-06-10 11:48:38.211054] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.336 [2024-06-10 11:48:38.214783] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.336 [2024-06-10 11:48:38.224051] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.336 [2024-06-10 11:48:38.224629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.336 [2024-06-10 11:48:38.224652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.336 [2024-06-10 11:48:38.224665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.336 [2024-06-10 11:48:38.224903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.336 [2024-06-10 11:48:38.225141] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.336 [2024-06-10 11:48:38.225155] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.336 [2024-06-10 11:48:38.225167] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.336 [2024-06-10 11:48:38.228899] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.336 [2024-06-10 11:48:38.238131] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.336 [2024-06-10 11:48:38.238712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.336 [2024-06-10 11:48:38.238734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.336 [2024-06-10 11:48:38.238747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.336 [2024-06-10 11:48:38.238984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.336 [2024-06-10 11:48:38.239225] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.336 [2024-06-10 11:48:38.239239] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.336 [2024-06-10 11:48:38.239252] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.336 [2024-06-10 11:48:38.242983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.336 [2024-06-10 11:48:38.252209] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.336 [2024-06-10 11:48:38.252793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.336 [2024-06-10 11:48:38.252815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.336 [2024-06-10 11:48:38.252828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.336 [2024-06-10 11:48:38.253064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.336 [2024-06-10 11:48:38.253301] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.336 [2024-06-10 11:48:38.253315] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.336 [2024-06-10 11:48:38.253328] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.336 [2024-06-10 11:48:38.257059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.336 [2024-06-10 11:48:38.266293] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.266875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.266897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.266911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.267147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.267383] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.267397] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.267410] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.271134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.337 [2024-06-10 11:48:38.280379] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.280965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.280988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.281001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.281236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.281473] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.281487] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.281499] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.285230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.337 [2024-06-10 11:48:38.294480] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.294988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.295011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.295024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.295260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.295496] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.295510] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.295523] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.299254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.337 [2024-06-10 11:48:38.308492] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.309068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.309091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.309104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.309340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.309583] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.309598] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.309611] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.313334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.337 [2024-06-10 11:48:38.322569] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.323152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.323173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.323186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.323423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.323664] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.323678] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.323690] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.327419] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.337 [2024-06-10 11:48:38.336642] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.337201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.337223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.337240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.337475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.337716] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.337731] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.337743] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.341468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.337 [2024-06-10 11:48:38.350704] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.351281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.351304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.351317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.351554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.351796] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.351810] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.351823] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.355549] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.337 [2024-06-10 11:48:38.364782] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.365219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.365241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.365254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.365489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.365733] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.365748] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.365760] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.369481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.337 [2024-06-10 11:48:38.378933] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.379489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.379511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.379523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.379764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.380002] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.380020] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.380032] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.383760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.337 [2024-06-10 11:48:38.392995] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.393505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.337 [2024-06-10 11:48:38.393528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.337 [2024-06-10 11:48:38.393541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.337 [2024-06-10 11:48:38.393783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.337 [2024-06-10 11:48:38.394022] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.337 [2024-06-10 11:48:38.394036] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.337 [2024-06-10 11:48:38.394049] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.337 [2024-06-10 11:48:38.397778] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.337 [2024-06-10 11:48:38.407313] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.337 [2024-06-10 11:48:38.407820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.338 [2024-06-10 11:48:38.407844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.338 [2024-06-10 11:48:38.407859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.338 [2024-06-10 11:48:38.408095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.338 [2024-06-10 11:48:38.408333] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.338 [2024-06-10 11:48:38.408348] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.338 [2024-06-10 11:48:38.408360] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.338 [2024-06-10 11:48:38.412089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.338 [2024-06-10 11:48:38.421347] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.338 [2024-06-10 11:48:38.421855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.338 [2024-06-10 11:48:38.421879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.338 [2024-06-10 11:48:38.421893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.338 [2024-06-10 11:48:38.422128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.338 [2024-06-10 11:48:38.422366] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.338 [2024-06-10 11:48:38.422380] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.338 [2024-06-10 11:48:38.422394] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.338 [2024-06-10 11:48:38.426127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.338 [2024-06-10 11:48:38.435359] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.338 [2024-06-10 11:48:38.435929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.338 [2024-06-10 11:48:38.435951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.338 [2024-06-10 11:48:38.435964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.338 [2024-06-10 11:48:38.436199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.338 [2024-06-10 11:48:38.436437] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.338 [2024-06-10 11:48:38.436451] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.338 [2024-06-10 11:48:38.436464] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.598 [2024-06-10 11:48:38.440212] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.598 [2024-06-10 11:48:38.449443] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.598 [2024-06-10 11:48:38.449960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.598 [2024-06-10 11:48:38.449982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.598 [2024-06-10 11:48:38.449995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.598 [2024-06-10 11:48:38.450231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.598 [2024-06-10 11:48:38.450468] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.598 [2024-06-10 11:48:38.450482] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.598 [2024-06-10 11:48:38.450495] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.598 [2024-06-10 11:48:38.454224] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.598 [2024-06-10 11:48:38.463468] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.598 [2024-06-10 11:48:38.463937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.598 [2024-06-10 11:48:38.463959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.598 [2024-06-10 11:48:38.463972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.598 [2024-06-10 11:48:38.464209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.598 [2024-06-10 11:48:38.464447] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.598 [2024-06-10 11:48:38.464461] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.598 [2024-06-10 11:48:38.464473] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.598 [2024-06-10 11:48:38.468199] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.598 [2024-06-10 11:48:38.477678] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.598 [2024-06-10 11:48:38.478122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.598 [2024-06-10 11:48:38.478145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.598 [2024-06-10 11:48:38.478158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.598 [2024-06-10 11:48:38.478398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.598 [2024-06-10 11:48:38.478643] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.598 [2024-06-10 11:48:38.478658] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.598 [2024-06-10 11:48:38.478671] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.598 [2024-06-10 11:48:38.482400] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.598 [2024-06-10 11:48:38.491863] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.598 [2024-06-10 11:48:38.492443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.598 [2024-06-10 11:48:38.492466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.598 [2024-06-10 11:48:38.492479] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.598 [2024-06-10 11:48:38.492723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.598 [2024-06-10 11:48:38.492962] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.598 [2024-06-10 11:48:38.492977] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.598 [2024-06-10 11:48:38.492989] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.598 [2024-06-10 11:48:38.496717] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.598 [2024-06-10 11:48:38.505957] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.598 [2024-06-10 11:48:38.506544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.598 [2024-06-10 11:48:38.506566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.598 [2024-06-10 11:48:38.506585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.598 [2024-06-10 11:48:38.506822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.598 [2024-06-10 11:48:38.507059] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.598 [2024-06-10 11:48:38.507073] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.598 [2024-06-10 11:48:38.507085] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.598 [2024-06-10 11:48:38.510834] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.598 [2024-06-10 11:48:38.520078] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.598 [2024-06-10 11:48:38.520635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.598 [2024-06-10 11:48:38.520657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.598 [2024-06-10 11:48:38.520670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.598 [2024-06-10 11:48:38.520907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.598 [2024-06-10 11:48:38.521143] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.598 [2024-06-10 11:48:38.521158] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.598 [2024-06-10 11:48:38.521174] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.598 [2024-06-10 11:48:38.524907] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.598 [2024-06-10 11:48:38.534155] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.598 [2024-06-10 11:48:38.534657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.598 [2024-06-10 11:48:38.534680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.598 [2024-06-10 11:48:38.534693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.598 [2024-06-10 11:48:38.534929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.598 [2024-06-10 11:48:38.535166] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.599 [2024-06-10 11:48:38.535180] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.599 [2024-06-10 11:48:38.535192] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.599 [2024-06-10 11:48:38.538925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.599 [2024-06-10 11:48:38.548163] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.599 [2024-06-10 11:48:38.548727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.599 [2024-06-10 11:48:38.548749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.599 [2024-06-10 11:48:38.548762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.599 [2024-06-10 11:48:38.548999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.599 [2024-06-10 11:48:38.549236] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.599 [2024-06-10 11:48:38.549250] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.599 [2024-06-10 11:48:38.549263] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.599 [2024-06-10 11:48:38.552996] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.599 [2024-06-10 11:48:38.562234] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.599 [2024-06-10 11:48:38.562825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.599 [2024-06-10 11:48:38.562848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.599 [2024-06-10 11:48:38.562862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.599 [2024-06-10 11:48:38.563097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.599 [2024-06-10 11:48:38.563334] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.599 [2024-06-10 11:48:38.563348] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.599 [2024-06-10 11:48:38.563362] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.599 [2024-06-10 11:48:38.567095] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.599 [2024-06-10 11:48:38.576339] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.599 [2024-06-10 11:48:38.576903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.599 [2024-06-10 11:48:38.576929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.599 [2024-06-10 11:48:38.576943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.599 [2024-06-10 11:48:38.577179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.599 [2024-06-10 11:48:38.577416] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.599 [2024-06-10 11:48:38.577430] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.599 [2024-06-10 11:48:38.577443] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.599 [2024-06-10 11:48:38.581170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.599 [2024-06-10 11:48:38.590408] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.599 [2024-06-10 11:48:38.590972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.599 [2024-06-10 11:48:38.590994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.599 [2024-06-10 11:48:38.591007] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.599 [2024-06-10 11:48:38.591242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.599 [2024-06-10 11:48:38.591479] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.599 [2024-06-10 11:48:38.591493] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.599 [2024-06-10 11:48:38.591505] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.599 [2024-06-10 11:48:38.595230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
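The block above is the host side of the test warming up: every bdev_nvme reset attempt issues a connect() toward 10.0.0.2:4420 before the target has a listener bound there, so each attempt fails with errno 111 (ECONNREFUSED) and the controller stays in the failed state until the listener is added further down. As a rough illustration only (not part of the test suite), a bash probe like the sketch below reproduces the same refuse-then-succeed pattern; the address and port are taken from the log, while the retry count and sleep interval are arbitrary choices.

```bash
#!/usr/bin/env bash
# Hypothetical probe, for illustration only: poll the NVMe/TCP endpoint the
# reconnect path keeps dialling. While nothing listens on the port, connect()
# is refused (errno 111); once a listener is added, the open succeeds.
ADDR=10.0.0.2
PORT=4420

for attempt in $(seq 1 10); do
    # bash's /dev/tcp pseudo-path performs a plain TCP connect() inside a
    # subshell, so a refused connection just makes the subshell return non-zero.
    if (exec 3<>"/dev/tcp/${ADDR}/${PORT}") 2>/dev/null; then
        echo "attempt ${attempt}: connect() succeeded - listener is up"
        break
    fi
    echo "attempt ${attempt}: connect() refused (errno 111), retrying"
    sleep 0.1
done
```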
00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:13.599 [2024-06-10 11:48:38.604465] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.599 [2024-06-10 11:48:38.605078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.599 [2024-06-10 11:48:38.605101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.599 [2024-06-10 11:48:38.605114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.599 [2024-06-10 11:48:38.605349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.599 [2024-06-10 11:48:38.605592] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.599 [2024-06-10 11:48:38.605609] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.599 [2024-06-10 11:48:38.605624] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.599 [2024-06-10 11:48:38.609351] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.599 [2024-06-10 11:48:38.618593] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.599 [2024-06-10 11:48:38.619155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.599 [2024-06-10 11:48:38.619184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.599 [2024-06-10 11:48:38.619197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.599 [2024-06-10 11:48:38.619435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.599 [2024-06-10 11:48:38.619679] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.599 [2024-06-10 11:48:38.619702] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.599 [2024-06-10 11:48:38.619714] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.599 [2024-06-10 11:48:38.623435] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.599 [2024-06-10 11:48:38.632687] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.599 [2024-06-10 11:48:38.633173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.599 [2024-06-10 11:48:38.633195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.599 [2024-06-10 11:48:38.633208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.599 [2024-06-10 11:48:38.633443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.599 [2024-06-10 11:48:38.633686] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.599 [2024-06-10 11:48:38.633703] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.599 [2024-06-10 11:48:38.633716] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.599 [2024-06-10 11:48:38.637445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:13.599 [2024-06-10 11:48:38.646061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:13.599 [2024-06-10 11:48:38.646696] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.599 [2024-06-10 11:48:38.647184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.599 [2024-06-10 11:48:38.647207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.599 [2024-06-10 11:48:38.647221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.599 [2024-06-10 11:48:38.647457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.599 [2024-06-10 11:48:38.647702] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.599 [2024-06-10 11:48:38.647716] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.599 [2024-06-10 11:48:38.647729] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.599 [2024-06-10 11:48:38.651454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.599 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:13.599 [2024-06-10 11:48:38.660721] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.599 [2024-06-10 11:48:38.661228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.599 [2024-06-10 11:48:38.661251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.599 [2024-06-10 11:48:38.661264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.599 [2024-06-10 11:48:38.661500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.599 [2024-06-10 11:48:38.661743] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.600 [2024-06-10 11:48:38.661758] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.600 [2024-06-10 11:48:38.661770] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.600 [2024-06-10 11:48:38.665497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.600 [2024-06-10 11:48:38.674750] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.600 [2024-06-10 11:48:38.675316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.600 [2024-06-10 11:48:38.675338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.600 [2024-06-10 11:48:38.675351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.600 [2024-06-10 11:48:38.675592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.600 [2024-06-10 11:48:38.675830] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.600 [2024-06-10 11:48:38.675844] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.600 [2024-06-10 11:48:38.675857] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.600 [2024-06-10 11:48:38.679588] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.600 [2024-06-10 11:48:38.688833] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.600 [2024-06-10 11:48:38.689394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.600 [2024-06-10 11:48:38.689417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.600 [2024-06-10 11:48:38.689430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.600 [2024-06-10 11:48:38.689672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.600 [2024-06-10 11:48:38.689910] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.600 [2024-06-10 11:48:38.689924] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.600 [2024-06-10 11:48:38.689937] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.600 [2024-06-10 11:48:38.693672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.600 Malloc0 00:40:13.600 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.600 11:48:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:13.600 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.600 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:13.859 [2024-06-10 11:48:38.702923] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.859 [2024-06-10 11:48:38.703502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.859 [2024-06-10 11:48:38.703524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.859 [2024-06-10 11:48:38.703537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.859 [2024-06-10 11:48:38.703779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.859 [2024-06-10 11:48:38.704016] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.859 [2024-06-10 11:48:38.704030] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.859 [2024-06-10 11:48:38.704043] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.859 [2024-06-10 11:48:38.707777] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:13.859 [2024-06-10 11:48:38.717014] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.859 [2024-06-10 11:48:38.717597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:13.859 [2024-06-10 11:48:38.717621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e96820 with addr=10.0.0.2, port=4420 00:40:13.859 [2024-06-10 11:48:38.717635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96820 is same with the state(5) to be set 00:40:13.859 [2024-06-10 11:48:38.717871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e96820 (9): Bad file descriptor 00:40:13.859 [2024-06-10 11:48:38.718108] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:13.859 [2024-06-10 11:48:38.718122] nvme_ctrlr.c:1804:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:13.859 [2024-06-10 11:48:38.718134] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:13.859 [2024-06-10 11:48:38.719702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:13.859 [2024-06-10 11:48:38.721861] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:13.859 11:48:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4169475 00:40:13.859 [2024-06-10 11:48:38.731272] nvme_ctrlr.c:1706:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:13.859 [2024-06-10 11:48:38.894070] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
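For readability, the rpc_cmd calls interleaved with the reset noise above boil down to the usual bring-up sequence for the NVMe-oF TCP target that bdevperf then drives. The sketch below condenses that sequence; it assumes a running nvmf_tgt and the rpc.py client from the SPDK tree checked out in this workspace, with the flags copied from the traced commands.

```bash
#!/usr/bin/env bash
# Condensed sketch of the target setup traced above (host/bdevperf.sh lines 17-21).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, flags as traced
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # expose Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Only after the last call does the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice appear, at which point the pending reconnects stop failing ("Resetting controller successful" above).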
00:40:23.841 00:40:23.841 Latency(us) 00:40:23.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:23.841 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:23.841 Verification LBA range: start 0x0 length 0x4000 00:40:23.841 Nvme1n1 : 15.01 6066.18 23.70 9757.91 0.00 8062.47 832.31 20971.52 00:40:23.841 =================================================================================================================== 00:40:23.841 Total : 6066.18 23.70 9757.91 0.00 8062.47 832.31 20971.52 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:23.841 rmmod nvme_tcp 00:40:23.841 rmmod nvme_fabrics 00:40:23.841 rmmod nvme_keyring 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 4170538 ']' 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 4170538 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 4170538 ']' 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 4170538 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4170538 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4170538' 00:40:23.841 killing process with pid 4170538 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 4170538 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 4170538 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
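A quick arithmetic check on the Nvme1n1 summary line above: with the job's 4096-byte IO size, the reported 6066.18 IOPS corresponds to 6066.18 x 4096 / 2^20, i.e. about 23.70 MiB/s, which matches the MiB/s column; the large Fail/s figure lines up with the reset/reconnect activity logged earlier in this run.

```bash
# Recompute the MiB/s column from the IOPS column and the 4096-byte IO size
# shown in the job header of the table above.
awk 'BEGIN { printf "%.2f MiB/s\n", 6066.18 * 4096 / (1024 * 1024) }'    # prints 23.70 MiB/s
```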
00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:23.841 11:48:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:24.778 11:48:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:24.778 00:40:24.778 real 0m29.656s 00:40:24.778 user 1m3.469s 00:40:24.778 sys 0m9.760s 00:40:24.778 11:48:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:24.778 11:48:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:24.778 ************************************ 00:40:24.778 END TEST nvmf_bdevperf 00:40:24.778 ************************************ 00:40:24.778 11:48:49 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:40:24.778 11:48:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:24.778 11:48:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:24.778 11:48:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:24.778 ************************************ 00:40:24.778 START TEST nvmf_target_disconnect 00:40:24.778 ************************************ 00:40:24.778 11:48:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:40:25.037 * Looking for test storage... 
00:40:25.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:40:25.037 11:48:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:35.022 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:35.022 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:35.022 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.023 11:48:58 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:35.023 Found net devices under 0000:af:00.0: cvl_0_0 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:35.023 Found net devices under 0000:af:00.1: cvl_0_1 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:35.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:35.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:40:35.023 00:40:35.023 --- 10.0.0.2 ping statistics --- 00:40:35.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.023 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:35.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:35.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:40:35.023 00:40:35.023 --- 10.0.0.1 ping statistics --- 00:40:35.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:35.023 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:35.023 ************************************ 00:40:35.023 START TEST nvmf_target_disconnect_tc1 00:40:35.023 ************************************ 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:40:35.023 
11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:35.023 EAL: No free 2048 kB hugepages reported on node 1 00:40:35.023 [2024-06-10 11:48:58.864753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:35.023 [2024-06-10 11:48:58.864873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2328ec0 with addr=10.0.0.2, port=4420 00:40:35.023 [2024-06-10 11:48:58.864930] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:40:35.023 [2024-06-10 11:48:58.864965] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:35.023 [2024-06-10 11:48:58.864992] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:40:35.023 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:40:35.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:40:35.023 Initializing NVMe Controllers 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:35.023 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:35.024 00:40:35.024 real 0m0.171s 00:40:35.024 user 0m0.057s 00:40:35.024 sys 
0m0.113s 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 ************************************ 00:40:35.024 END TEST nvmf_target_disconnect_tc1 00:40:35.024 ************************************ 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 ************************************ 00:40:35.024 START TEST nvmf_target_disconnect_tc2 00:40:35.024 ************************************ 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4176604 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4176604 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 4176604 ']' 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:35.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:35.024 11:48:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 [2024-06-10 11:48:59.023068] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:40:35.024 [2024-06-10 11:48:59.023125] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:35.024 EAL: No free 2048 kB hugepages reported on node 1 00:40:35.024 [2024-06-10 11:48:59.148848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:35.024 [2024-06-10 11:48:59.234204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:35.024 [2024-06-10 11:48:59.234248] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:35.024 [2024-06-10 11:48:59.234262] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:35.024 [2024-06-10 11:48:59.234274] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:35.024 [2024-06-10 11:48:59.234284] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:35.024 [2024-06-10 11:48:59.234427] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:40:35.024 [2024-06-10 11:48:59.234543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:40:35.024 [2024-06-10 11:48:59.234654] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:40:35.024 [2024-06-10 11:48:59.234655] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 Malloc0 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:35.024 11:48:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 [2024-06-10 11:49:00.003907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 [2024-06-10 11:49:00.036202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4176737 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:40:35.024 11:49:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:35.282 EAL: No free 2048 kB hugepages reported on node 1 00:40:37.187 11:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4176604 00:40:37.187 11:49:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 
00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 [2024-06-10 11:49:02.068151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 
starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 [2024-06-10 11:49:02.068461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 
00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 [2024-06-10 11:49:02.068778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read 
completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Read completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 Write completed with error (sct=0, sc=8) 00:40:37.187 starting I/O failed 00:40:37.187 [2024-06-10 11:49:02.069077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:37.187 [2024-06-10 11:49:02.069454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.187 [2024-06-10 11:49:02.069474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.187 qpair failed and we were unable to recover it. 00:40:37.187 [2024-06-10 11:49:02.069752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.187 [2024-06-10 11:49:02.069804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.187 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.070088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.070128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.070363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.070403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.070759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.070800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.071098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.071138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.071474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.071514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 
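For reference, the device discovery logged at 11:48:58 above (gather_supported_nvmf_pci_devs) boils down to matching PCI vendor:device IDs and then resolving the bound kernel net device through sysfs. A minimal sketch of that lookup, reusing the 0000:af:00.0 / 0x8086:0x159b values from this run (illustrative only, not the actual nvmf/common.sh code):
pci=0000:af:00.0
vendor=$(cat /sys/bus/pci/devices/$pci/vendor)   # expect 0x8086 (Intel)
device=$(cat /sys/bus/pci/devices/$pci/device)   # expect 0x159b (E810)
for net in /sys/bus/pci/devices/$pci/net/*; do
    dev=${net##*/}                               # e.g. cvl_0_0
    state=$(cat "$net/operstate")                # harness only keeps links that are "up"
    echo "Found net device under $pci: $dev ($vendor:$device, $state)"
done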
00:40:37.188 [2024-06-10 11:49:02.071905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.071946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.072298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.072337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.072663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.072705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.072993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.073007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.073229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.073242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.073414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.073427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.073693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.073734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.074088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.074128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.074451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.074493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.074737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.074750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 
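The nvmf_tcp_init plumbing logged earlier in this section condenses to the sequence below: the target-side port (cvl_0_0) is moved into its own network namespace so target and initiator exchange real TCP traffic on one host. Interface names, addresses, and the 4420 port are the ones used in this run.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port
ping -c 1 10.0.0.2                                             # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host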
00:40:37.188 [2024-06-10 11:49:02.074962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.075002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.075278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.075319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.075678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.075718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.076014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.076054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.076429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.076469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.076794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.076808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.077117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.077160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.077513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.077554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.077768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.077781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.078088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.078102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 
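nvmf_target_disconnect_tc1, whose output appears further up, is a negative probe: nothing is listening on 10.0.0.2:4420 at that point, so the reconnect example must fail (hence the NOT wrapper and es=1). A hypothetical stand-alone equivalent of that check, with the binary path abbreviated:
if ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    echo "tc1: unexpected success, a target was already listening" >&2
    exit 1
fi
echo "tc1: reconnect failed as expected (no listener yet)"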
00:40:37.188 [2024-06-10 11:49:02.078350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.078364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.078666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.078679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.078854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.078894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.079170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.079210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.079562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.079612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.079861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.079901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.080244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.080283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.080571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.080624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.080921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.080961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.081357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.081397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 
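For tc2 the harness first starts a real target inside the namespace and configures it through rpc_cmd, as logged above. Assuming rpc_cmd forwards its arguments to scripts/rpc.py over /var/tmp/spdk.sock (the rpc_addr shown in the log), the same bring-up could be reproduced roughly as follows, with paths abbreviated:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
# (the harness waits for the RPC socket before configuring)
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420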
00:40:37.188 [2024-06-10 11:49:02.081673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.081686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.081850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.081864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.188 [2024-06-10 11:49:02.082177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.188 [2024-06-10 11:49:02.082191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.188 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.082512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.082525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.082879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.082895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.083206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.083219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.083407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.083421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.083735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.083776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.084086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.084126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.084513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.084553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 
00:40:37.189 [2024-06-10 11:49:02.084930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.084971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.085350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.085390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.085774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.085816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.086111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.086150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.086497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.086537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.086930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.086970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.087343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.087383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.087727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.087767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.088097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.088137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 00:40:37.189 [2024-06-10 11:49:02.088437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.189 [2024-06-10 11:49:02.088477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.189 qpair failed and we were unable to recover it. 
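The disconnect itself is then forced by hard-killing the target while the reconnect initiator is mid-I/O; the sleep/kill sequence logged above reduces to the sketch below (arguments copied from the logged invocation, $nvmfpid being the target PID, 4176604 in this run):
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
reconnectpid=$!
sleep 2              # let the queue pairs connect and I/O start flowing
kill -9 "$nvmfpid"   # hard-kill nvmf_tgt; outstanding I/O completes with errors
sleep 2              # the initiator keeps retrying connect(), as seen in this log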
00:40:37.189 [2024-06-10 11:49:02.088833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:37.189 [2024-06-10 11:49:02.088874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:37.189 qpair failed and we were unable to recover it.
00:40:37.189-00:40:37.195 (the same three-line error repeats more than 200 times with only the timestamps changing, from 11:49:02.088833 through 11:49:02.165287: every connect() to tqpair=0x7f4870000b90 at 10.0.0.2, port 4420 fails with errno = 111 and the qpair is reported as unrecoverable)
00:40:37.195 [2024-06-10 11:49:02.165274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:37.195 [2024-06-10 11:49:02.165287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:37.195 qpair failed and we were unable to recover it.
00:40:37.195 [2024-06-10 11:49:02.165570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.165618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.165992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.166033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.166330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.166370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.166684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.166697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.166950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.166990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.167338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.167378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.167743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.167784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.168146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.168159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.168456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.168496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.168816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.168828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 
00:40:37.195 [2024-06-10 11:49:02.169120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.169159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.169530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.169570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.169850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.169863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.170097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.170136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.170482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.170528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.170914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.170956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.171253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.171293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.171657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.171698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.172067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.172107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.172421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.172461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 
00:40:37.195 [2024-06-10 11:49:02.172828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.172869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.173245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.173286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.173615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.173659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.173911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.173925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.174164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.174195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.174530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.174570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.174930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.174970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.175357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.175397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.175773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.195 [2024-06-10 11:49:02.175814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.195 qpair failed and we were unable to recover it. 00:40:37.195 [2024-06-10 11:49:02.176109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.176123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 
00:40:37.196 [2024-06-10 11:49:02.176425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.176466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.176754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.176795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.177089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.177129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.177491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.177532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.177975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.178055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.178455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.178498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.178859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.178900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.179175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.179190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.179419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.179432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.179725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.179766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 
00:40:37.196 [2024-06-10 11:49:02.180066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.180107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.180478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.180518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.180828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.180882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.181191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.181232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.181612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.181654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.182022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.182062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.182427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.182468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.182837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.182878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.183159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.183199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.183567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.183619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 
00:40:37.196 [2024-06-10 11:49:02.183992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.184032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.184309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.184349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.184704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.184745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.185090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.185130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.185522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.185568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.185938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.185973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.186257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.186271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.186583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.186596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.186876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.186889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.187174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.187218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 
00:40:37.196 [2024-06-10 11:49:02.187597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.187638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.187996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.188010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.188295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.188335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.188569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.188621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.188969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.189009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.189239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.189279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.189671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.189712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.190003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.190043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.190292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.190332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.190634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.190674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 
00:40:37.196 [2024-06-10 11:49:02.191035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.191074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.191464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.191504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.191891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.191931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.192227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.192267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.192641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.192682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.192985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.192998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.193375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.193415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.193788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.193829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.194196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.194236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 00:40:37.196 [2024-06-10 11:49:02.194610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.194650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.196 qpair failed and we were unable to recover it. 
00:40:37.196 [2024-06-10 11:49:02.194919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.196 [2024-06-10 11:49:02.194933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.195180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.195193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.195488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.195528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.195817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.195857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.196202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.196243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.196542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.196591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.196886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.196925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.197207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.197248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.197616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.197658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.198029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.198069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 
00:40:37.197 [2024-06-10 11:49:02.198468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.198508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.198869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.198912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.199260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.199273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.199502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.199515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.199768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.199783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.200045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.200095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.200393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.200433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.200785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.200825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.201171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.201212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.201437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.201477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 
00:40:37.197 [2024-06-10 11:49:02.201871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.201912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.202259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.202299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.202588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.202630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.202994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.203034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.203318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.203331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.203655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.203695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.204083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.204097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.204385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.204399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.204712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.204753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.205035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.205075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 
00:40:37.197 [2024-06-10 11:49:02.205420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.205455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.205777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.205819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.206166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.206206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.206599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.206640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.206986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.207026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.207343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.207356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.207648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.207689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.207967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.208007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.208294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.208307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.208620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.208661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 
00:40:37.197 [2024-06-10 11:49:02.209014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.209054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.209407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.209420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.209596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.209610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.209918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.209931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.210237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.210277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.210555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.210606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.210974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.211014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.211367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.211407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.211777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.211818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.212162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.212175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 
00:40:37.197 [2024-06-10 11:49:02.212488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.212528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.212904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.212945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.213307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.213348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.213705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.213746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.214097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.214137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.214429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.214442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.214674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.214688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.214998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.215012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.215304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.215344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.215712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.215754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 
00:40:37.197 [2024-06-10 11:49:02.216097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.216111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.216450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.216463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.216709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.216749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.217056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.217096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.217367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.217381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.217697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.217739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.218112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.218152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.218544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.218595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.218881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.218922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.219206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.219246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 
00:40:37.197 [2024-06-10 11:49:02.219552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.219602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.219960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.219973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.220286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.220300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.197 qpair failed and we were unable to recover it. 00:40:37.197 [2024-06-10 11:49:02.220539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.197 [2024-06-10 11:49:02.220552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.220900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.220914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.221173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.221187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.221478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.221492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.221789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.221830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.222198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.222238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.222535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.222595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 
00:40:37.198 [2024-06-10 11:49:02.222951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.222991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.223245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.223292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.223665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.223706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.224057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.224097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.224406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.224447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.224722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.224763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.225123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.225163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.225538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.225588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.225929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.225969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.226269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.226283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 
00:40:37.198 [2024-06-10 11:49:02.226613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.226655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.226939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.226979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.227352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.227393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.227773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.227830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.228201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.228251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.228567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.228616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.228987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.229027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.229288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.229302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.229623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.229665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.230014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.230054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 
00:40:37.198 [2024-06-10 11:49:02.230441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.230482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.230728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.230769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.231117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.231157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.231538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.231591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.231941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.231981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.232265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.232304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.232625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.232666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.232961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.233001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.233307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.233320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.233618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.233659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 
00:40:37.198 [2024-06-10 11:49:02.233959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.234000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.234395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.234435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.234803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.234845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.235153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.235166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.235419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.235465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.235820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.235860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.236177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.236217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.236448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.236488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.236882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.236923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.237246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.237287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 
00:40:37.198 [2024-06-10 11:49:02.237588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.237628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.237975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.238022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.238321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.238361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.238708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.238750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.239056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.239096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.239468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.239508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.239889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.239930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.240229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.240269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.240637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.240678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.240959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.240999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 
00:40:37.198 [2024-06-10 11:49:02.241298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.241338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.241735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.241776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.242077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.242118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.242433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.242473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.242789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.242830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.243193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.243233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.243553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.243602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.243981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.244021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.244368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.244409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 00:40:37.198 [2024-06-10 11:49:02.244780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.198 [2024-06-10 11:49:02.244821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.198 qpair failed and we were unable to recover it. 
00:40:37.199 [2024-06-10 11:49:02.245194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.245234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.245520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.245561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.245935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.245975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.246227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.246267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.246641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.246682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.246953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.246966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.247296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.247336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.247709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.247750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.248122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.248163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.248446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.248486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 
00:40:37.199 [2024-06-10 11:49:02.248837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.248878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.249235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.249275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.249653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.249694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.250065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.250105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.250413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.250426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.250726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.250767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.251133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.251173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.251535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.251569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.251953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.251993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.252359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.252372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 
00:40:37.199 [2024-06-10 11:49:02.252602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.252616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.252929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.252975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.253349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.253390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.253787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.253828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.254112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.254152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.254546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.254595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.254889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.254929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.255277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.255317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.255707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.255748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.256117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.256163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 
00:40:37.199 [2024-06-10 11:49:02.256480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.256521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.256886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.256927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.257217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.257230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.257547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.257598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.257893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.257933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.258291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.258331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.258649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.258691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.259075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.259115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.259490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.259531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.259866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.259907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 
00:40:37.199 [2024-06-10 11:49:02.260282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.260322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.260696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.260737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.261100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.261140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.261445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.261485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.261886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.261928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.262240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.262253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.262615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.262657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.263030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.263071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.263448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.263489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.263870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.263911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 
00:40:37.199 [2024-06-10 11:49:02.264271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.264311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.264619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.264660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.265032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.265072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.265360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.265400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.265796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.265837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.266164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.266204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.266589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.266630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.266941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.266982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.267354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.267393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.267712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.267753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 
00:40:37.199 [2024-06-10 11:49:02.268126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.268166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.268514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.268561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.268929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.268969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.269263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.269303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.269676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.269718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.270003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.270044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.270413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.270453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.270807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.270849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.271100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.271114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.271446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.271486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 
00:40:37.199 [2024-06-10 11:49:02.271769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.271812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.272199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.272239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.272609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.199 [2024-06-10 11:49:02.272651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.199 qpair failed and we were unable to recover it. 00:40:37.199 [2024-06-10 11:49:02.273000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.200 [2024-06-10 11:49:02.273040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.200 qpair failed and we were unable to recover it. 00:40:37.200 [2024-06-10 11:49:02.273410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.200 [2024-06-10 11:49:02.273450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.200 qpair failed and we were unable to recover it. 00:40:37.200 [2024-06-10 11:49:02.273831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.200 [2024-06-10 11:49:02.273873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.200 qpair failed and we were unable to recover it. 00:40:37.200 [2024-06-10 11:49:02.274248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.200 [2024-06-10 11:49:02.274288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.200 qpair failed and we were unable to recover it. 00:40:37.200 [2024-06-10 11:49:02.274611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.200 [2024-06-10 11:49:02.274652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.200 qpair failed and we were unable to recover it. 00:40:37.200 [2024-06-10 11:49:02.275020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.200 [2024-06-10 11:49:02.275034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.200 qpair failed and we were unable to recover it. 00:40:37.200 [2024-06-10 11:49:02.275347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.200 [2024-06-10 11:49:02.275360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.200 qpair failed and we were unable to recover it. 
00:40:37.200 [2024-06-10 11:49:02.275681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:37.200 [2024-06-10 11:49:02.275723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:37.200 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every intermediate connection attempt, with only the microsecond timestamps changing, through 11:49:02.344 ...]
00:40:37.475 [2024-06-10 11:49:02.344655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:37.475 [2024-06-10 11:49:02.344669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:37.475 qpair failed and we were unable to recover it.
00:40:37.475 [2024-06-10 11:49:02.344981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.344996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.345251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.345265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.345487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.345501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.345754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.345768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.346084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.346097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.346327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.346340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.346586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.346600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.346896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.346910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.347142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.347156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.347458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.347472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 
00:40:37.475 [2024-06-10 11:49:02.347783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.347798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.348019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.348035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.348348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.348362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.348601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.348615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.348919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.348933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.349246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.349260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.349568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.349586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.349819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.349833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.350092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.350106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.350391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.350405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 
00:40:37.475 [2024-06-10 11:49:02.350704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.350717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.351042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.351056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.475 [2024-06-10 11:49:02.351345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.475 [2024-06-10 11:49:02.351358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.475 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.351668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.351682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.351916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.351930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.352243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.352257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.352573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.352591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.352945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.352958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.353243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.353257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.353567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.353586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 
00:40:37.476 [2024-06-10 11:49:02.353871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.353885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.354195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.354209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.354526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.354540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.354756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.354769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.355008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.355022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.355380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.355393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.355639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.355653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.355917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.355931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.356224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.356238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.356546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.356559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 
00:40:37.476 [2024-06-10 11:49:02.356810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.356824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.357042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.357055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.357340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.357354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.357598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.357612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.357922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.357935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.358246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.358260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.358522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.358536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.358832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.358845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.359084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.359097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.359331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.359345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 
00:40:37.476 [2024-06-10 11:49:02.359585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.359599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.359850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.359866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.360174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.360187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.360445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.360459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.360753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.360767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.361048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.361061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.361374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.361388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.361702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.361716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.362003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.362017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.362305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.362318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 
00:40:37.476 [2024-06-10 11:49:02.362630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.476 [2024-06-10 11:49:02.362644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.476 qpair failed and we were unable to recover it. 00:40:37.476 [2024-06-10 11:49:02.362825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.362838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.363118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.363131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.363426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.363439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.363772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.363786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.364042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.364056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.364348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.364362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.364669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.364683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.364898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.364911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.365242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.365256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 
00:40:37.477 [2024-06-10 11:49:02.365584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.365597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.365815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.365829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.366043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.366057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.366364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.366378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.366615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.366629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.366912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.366926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.367256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.367270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.367581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.367595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.367893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.367907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.368148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.368162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 
00:40:37.477 [2024-06-10 11:49:02.368470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.368484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.368792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.368806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.369048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.369062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.369364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.369377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.369663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.369676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.369948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.369962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.370267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.370280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.370533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.370547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.370884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.370898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.371180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.371194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 
00:40:37.477 [2024-06-10 11:49:02.371499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.371513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.371762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.371779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.372075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.372088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.372400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.372413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.372675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.372689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.373020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.373034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.373375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.373389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.373677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.373691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.373929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.477 [2024-06-10 11:49:02.373942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.477 qpair failed and we were unable to recover it. 00:40:37.477 [2024-06-10 11:49:02.374242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.374256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 
00:40:37.478 [2024-06-10 11:49:02.374422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.374436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.374658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.374672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.374937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.374951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.375256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.375269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.375581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.375595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.375902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.375915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.376174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.376187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.376351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.376365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.376668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.376682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.377018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.377032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 
00:40:37.478 [2024-06-10 11:49:02.377284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.377298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.377602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.377615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.377925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.377939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.378168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.378182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.378423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.378436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.378665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.378679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.378936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.378949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.379242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.379255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.379532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.379546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.379829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.379843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 
00:40:37.478 [2024-06-10 11:49:02.380149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.380163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.380394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.380408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.380646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.380660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.380969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.380983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.381228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.381241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.381486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.381500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.381805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.381819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.382094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.382107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.382429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.382443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.382701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.382714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 
00:40:37.478 [2024-06-10 11:49:02.382943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.382957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.383260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.383275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.383533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.383547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.383717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.383730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.383972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.383985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.384289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.384303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.384610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.384624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.478 qpair failed and we were unable to recover it. 00:40:37.478 [2024-06-10 11:49:02.384934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.478 [2024-06-10 11:49:02.384947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.385270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.385283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.385634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.385648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 
00:40:37.479 [2024-06-10 11:49:02.385976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.385990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.386244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.386258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.386493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.386507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.386816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.386829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.387055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.387067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.387303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.387317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.387515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.387527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.387765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.387779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.388082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.388095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.388321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.388335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 
00:40:37.479 [2024-06-10 11:49:02.388619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.388632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.388916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.388929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.389184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.389198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.389457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.389471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.389757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.389770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.390077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.390091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.390342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.390356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.390570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.390594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.390753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.390769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.391003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.391017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 
00:40:37.479 [2024-06-10 11:49:02.391176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.391189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.391404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.391418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.391710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.391724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.392028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.392042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.392275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.392288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.392572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.392590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.392814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.392827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.393114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.393127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.393432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.393446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 00:40:37.479 [2024-06-10 11:49:02.393701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.479 [2024-06-10 11:49:02.393715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.479 qpair failed and we were unable to recover it. 
00:40:37.479 [2024-06-10 11:49:02.394019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.394033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.394263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.394277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.394589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.394603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.394908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.394921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.395149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.395162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.395478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.395492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.395795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.395808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.396115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.396128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.396434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.396447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.396752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.396766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 
00:40:37.480 [2024-06-10 11:49:02.397048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.397061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.397357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.397370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.397689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.397703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.397985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.397999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.398295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.398308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.398638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.398652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.398956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.398970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.399268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.399281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.399521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.399534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.399861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.399874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 
00:40:37.480 [2024-06-10 11:49:02.400107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.400120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.400360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.400374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.400598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.400611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.400893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.400907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.401194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.401208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.401434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.401447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.401672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.401685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.401865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.401879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.402104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.402120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.402404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.402417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 
00:40:37.480 [2024-06-10 11:49:02.402593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.402605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.402841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.402855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.403188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.403201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.403371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.403384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.403679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.403693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.403932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.403946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.404273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.404287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.404514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.404527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.480 [2024-06-10 11:49:02.404827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.480 [2024-06-10 11:49:02.404840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.480 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.405122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.405136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 
00:40:37.481 [2024-06-10 11:49:02.405451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.405464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.405768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.405793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.406102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.406115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.406437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.406451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.406673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.406687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.406917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.406931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.407238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.407251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.407538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.407552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.407773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.407787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.408077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.408091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 
00:40:37.481 [2024-06-10 11:49:02.408373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.408386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.408712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.408725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.408976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.408990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.409282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.409295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.409603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.409616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.409833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.409845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.410174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.410187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.410495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.410509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.410763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.410777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.411009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.411023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 
00:40:37.481 [2024-06-10 11:49:02.411273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.411286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.411594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.411607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.411769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.411782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.412033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.412046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.412353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.412366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.412592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.412606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.412908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.412922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.413150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.413163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.413468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.413483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.413700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.413714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 
00:40:37.481 [2024-06-10 11:49:02.414020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.414034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.414291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.414304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.414610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.414624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.414865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.414879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.415197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.415211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.415493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.415506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.415732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.481 [2024-06-10 11:49:02.415745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.481 qpair failed and we were unable to recover it. 00:40:37.481 [2024-06-10 11:49:02.416050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.416063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.416278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.416291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.416562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.416579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 
00:40:37.482 [2024-06-10 11:49:02.416885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.416899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.417131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.417144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.417453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.417466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.417724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.417737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.418025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.418038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.418341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.418355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.418666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.418680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.418892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.418905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.419229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.419242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.419527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.419541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 
00:40:37.482 [2024-06-10 11:49:02.419843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.419856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.420112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.420125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.420418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.420431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.420710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.420723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.421018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.421032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.421367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.421407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.421772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.421813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.422160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.422200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.422538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.422586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.422956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.422996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 
00:40:37.482 [2024-06-10 11:49:02.423289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.423330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.423615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.423656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.423973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.424013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.424379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.424419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.424760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.424802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.425166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.425207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.425554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.425604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.425964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.426017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.426379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.426430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.426713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.426726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 
00:40:37.482 [2024-06-10 11:49:02.426953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.426966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.427182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.427195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.427433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.427473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.427856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.427897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.428191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.482 [2024-06-10 11:49:02.428231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.482 qpair failed and we were unable to recover it. 00:40:37.482 [2024-06-10 11:49:02.428527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.428567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.428895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.428935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.429280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.429320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.429551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.429601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.429970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.430010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 
00:40:37.483 [2024-06-10 11:49:02.430375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.430415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.430788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.430828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.431205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.431246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.431516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.431529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.431811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.431824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.432130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.432169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.432522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.432562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.432885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.432925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.433282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.433323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.433616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.433656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 
00:40:37.483 [2024-06-10 11:49:02.434018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.434058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.434423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.434463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.434770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.434812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.435180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.435219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.435531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.435571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.435990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.436004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.436237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.436260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.436495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.436535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.436780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.436821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.437121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.437161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 
00:40:37.483 [2024-06-10 11:49:02.437454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.437494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.437860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.437900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.438268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.438307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.438674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.438715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.439084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.439124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.439420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.439460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.439805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.439846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.440161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.440212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.440461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.440502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.440873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.440914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 
00:40:37.483 [2024-06-10 11:49:02.441257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.441297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.441601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.441613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.441853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.441893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.483 [2024-06-10 11:49:02.442243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.483 [2024-06-10 11:49:02.442283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.483 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.442675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.442717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.442998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.443038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.443398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.443437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.443728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.443769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.444125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.444166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.444469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.444509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 
00:40:37.484 [2024-06-10 11:49:02.444904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.444945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.445248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.445288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.445585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.445627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.446025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.446065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.446430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.446471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.446808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.446821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.447098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.447111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.447336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.447378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.447651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.447692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.447986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.448026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 
00:40:37.484 [2024-06-10 11:49:02.448393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.448433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.448710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.448723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.449029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.449042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.449279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.449319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.449666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.449707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.450078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.450119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.450396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.450436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.450776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.450800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.451106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.451146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.451490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.451529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 
00:40:37.484 [2024-06-10 11:49:02.451847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.451888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.452254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.452293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.452640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.452681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.453044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.453083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.453448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.453488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.484 qpair failed and we were unable to recover it. 00:40:37.484 [2024-06-10 11:49:02.453848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.484 [2024-06-10 11:49:02.453862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.454165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.454205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.454557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.454609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.454978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.455025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.455393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.455433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 
00:40:37.485 [2024-06-10 11:49:02.455782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.455795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.456140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.456180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.456547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.456596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.456933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.456973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.457339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.457379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.457746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.457787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.458083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.458123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.458466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.458506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.458904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.458945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.459261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.459301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 
00:40:37.485 [2024-06-10 11:49:02.459681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.459721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.460087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.460127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.460501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.460541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.460879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.460920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.461286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.461326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.461693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.461734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.462024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.462054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.462400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.462439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.462732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.462773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.463089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.463130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 
00:40:37.485 [2024-06-10 11:49:02.463475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.463515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.463847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.463888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.464233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.464273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.464635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.464677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.465025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.465065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.465449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.465489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.465780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.465794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.466106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.466146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.466505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.466545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.466902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.466942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 
00:40:37.485 [2024-06-10 11:49:02.467321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.467361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.467725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.467766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.468070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.468109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.485 [2024-06-10 11:49:02.468493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.485 [2024-06-10 11:49:02.468533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.485 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.468915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.468956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.469323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.469362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.469642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.469655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.469967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.470006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.470286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.470332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.470621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.470675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 
00:40:37.486 [2024-06-10 11:49:02.470980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.471020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.471249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.471289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.471683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.471724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.472067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.472107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.472492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.472532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.472841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.472882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.473162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.473202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.473540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.473591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.473971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.474010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.474378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.474418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 
00:40:37.486 [2024-06-10 11:49:02.474791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.474833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.475203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.475243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.475622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.475663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.476032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.476072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.476347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.476387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.476667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.476708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.477051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.477091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.477458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.477498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.477788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.477801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.478045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.478085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 
00:40:37.486 [2024-06-10 11:49:02.478449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.478488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.478729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.478770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.479065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.479105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.479421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.479461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.479819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.479833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.480063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.480076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.480319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.480332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.480615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.480655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.480966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.481006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.481368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.481408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 
00:40:37.486 [2024-06-10 11:49:02.481793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.481834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.482113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.486 [2024-06-10 11:49:02.482153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.486 qpair failed and we were unable to recover it. 00:40:37.486 [2024-06-10 11:49:02.482510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.482550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.482930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.482971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.483249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.483289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.483640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.483681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.484028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.484068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.484479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.484519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.484850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.484898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.485177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.485218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 
00:40:37.487 [2024-06-10 11:49:02.485587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.485628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.485977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.486028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.486393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.486444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.486692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.486726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.487099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.487140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.487444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.487484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.487844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.487857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.488153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.488193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.488472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.488512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.488803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.488839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 
00:40:37.487 [2024-06-10 11:49:02.489128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.489167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.489383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.489424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.489800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.489842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.490211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.490250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.490634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.490675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.491049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.491089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.491405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.491445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.491794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.491835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.492219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.492259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.492628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.492669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 
00:40:37.487 [2024-06-10 11:49:02.493035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.493076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.493446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.493486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.493883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.493924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.494239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.494279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.494512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.494552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.494951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.494992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.495361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.495401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.495766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.495807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.496087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.496127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 00:40:37.487 [2024-06-10 11:49:02.496483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.496524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.487 qpair failed and we were unable to recover it. 
00:40:37.487 [2024-06-10 11:49:02.496919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.487 [2024-06-10 11:49:02.496961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.497307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.497346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.497595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.497609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.497917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.497931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.498243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.498283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.498636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.498677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.499056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.499069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.499340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.499380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.499681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.499733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.500035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.500048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 
00:40:37.488 [2024-06-10 11:49:02.500406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.500446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.500850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.500887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.501200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.501241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.501482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.501523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.501908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.501949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.502228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.502268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.502623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.502664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.503046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.503086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.503454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.503495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.503889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.503930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 
00:40:37.488 [2024-06-10 11:49:02.504230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.504270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.504615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.504657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.505045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.505085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.505430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.505469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.505837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.505851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.506167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.506208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.506554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.506611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.506915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.506929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.507222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.507262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.507540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.507589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 
00:40:37.488 [2024-06-10 11:49:02.507958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.507998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.508347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.508387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.508685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.488 [2024-06-10 11:49:02.508727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.488 qpair failed and we were unable to recover it. 00:40:37.488 [2024-06-10 11:49:02.509097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.509137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.509509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.509549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.509861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.509875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.510038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.510050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.510365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.510405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.510754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.510800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.511106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.511146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 
00:40:37.489 [2024-06-10 11:49:02.511438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.511478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.511774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.511814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.512165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.512204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.512494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.512533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.512891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.512932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.513306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.513346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.513722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.513763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.514133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.514173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.514522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.514568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.514928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.514968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 
00:40:37.489 [2024-06-10 11:49:02.515250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.515291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.515639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.515681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.516054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.516094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.516470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.516509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.516807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.516849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.517220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.517260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.517594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.517636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.518011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.518051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.518424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.518465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.518716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.518729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 
00:40:37.489 [2024-06-10 11:49:02.519032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.519073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.519466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.519507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.519897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.519938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.520331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.520372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.520739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.520781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.521151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.521165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.521481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.521521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.521900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.521941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.522314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.522354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.522707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.522749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 
00:40:37.489 [2024-06-10 11:49:02.523134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.523174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.489 qpair failed and we were unable to recover it. 00:40:37.489 [2024-06-10 11:49:02.523538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.489 [2024-06-10 11:49:02.523587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.523887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.523928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.524297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.524336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.524708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.524759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.525002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.525016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.525406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.525447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.525840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.525882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.526246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.526287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.526657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.526698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 
00:40:37.490 [2024-06-10 11:49:02.526981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.527021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.527296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.527336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.527646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.527687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.528058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.528098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.528457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.528497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.528882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.528924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.529293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.529333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.529640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.529681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.530048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.530094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.530465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.530505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 
00:40:37.490 [2024-06-10 11:49:02.530886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.530927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.531301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.531341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.531636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.531677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.532045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.532086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.532388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.532428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.532776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.532817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.533117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.533130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.533461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.533474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.533787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.533828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.534196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.534236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 
00:40:37.490 [2024-06-10 11:49:02.534604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.534645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.535007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.535020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.535369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.535409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.535760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.535801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.536085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.536098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.536323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.536336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.536674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.536715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.537085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.537126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.537496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.537536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 00:40:37.490 [2024-06-10 11:49:02.537896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.490 [2024-06-10 11:49:02.537937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.490 qpair failed and we were unable to recover it. 
00:40:37.491 [2024-06-10 11:49:02.538292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.538332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.538714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.538755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.539126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.539166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.539443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.539484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.539864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.539878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.540160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.540173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.540489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.540530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.540900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.540942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.541245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.541286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.541657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.541699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 
00:40:37.491 [2024-06-10 11:49:02.542063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.542103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.542473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.542513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.542901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.542942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.543290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.543330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.543702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.543744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.544124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.544164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.544537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.544587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.544959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.545000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.545370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.545415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.545764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.545805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 
00:40:37.491 [2024-06-10 11:49:02.546186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.546226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.546523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.546564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.546921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.546962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.547237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.547277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.547668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.547710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.547989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.548029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.548381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.548420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.548823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.548865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.549237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.549277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.549646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.549690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 
00:40:37.491 [2024-06-10 11:49:02.549987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.549999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.550231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.550244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.550589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.550630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.551012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.551052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.551373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.551413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.551767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.551808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.552178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.552219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.552517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.491 [2024-06-10 11:49:02.552556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.491 qpair failed and we were unable to recover it. 00:40:37.491 [2024-06-10 11:49:02.552863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.552904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.553306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.553345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 
00:40:37.492 [2024-06-10 11:49:02.553719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.553759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.554133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.554174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.554470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.554510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.554888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.554901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.555178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.555191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.555420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.555433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.555678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.555720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.556037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.556077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.556450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.556491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.556882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.556923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 
00:40:37.492 [2024-06-10 11:49:02.557221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.557278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.557608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.557649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.558020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.558061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.558480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.558520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.558858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.558872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.559043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.559057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.559394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.559408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.559655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.559669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.559990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.560004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.560226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.560239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 
00:40:37.492 [2024-06-10 11:49:02.560504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.560518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.560773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.560787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.561074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.561088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.561363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.561403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.561740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.561781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.562159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.562199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.492 [2024-06-10 11:49:02.562493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.492 [2024-06-10 11:49:02.562533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.492 qpair failed and we were unable to recover it. 00:40:37.765 [2024-06-10 11:49:02.562866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.765 [2024-06-10 11:49:02.562881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.765 qpair failed and we were unable to recover it. 00:40:37.765 [2024-06-10 11:49:02.563194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.765 [2024-06-10 11:49:02.563209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.765 qpair failed and we were unable to recover it. 00:40:37.765 [2024-06-10 11:49:02.563529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.765 [2024-06-10 11:49:02.563543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.765 qpair failed and we were unable to recover it. 
00:40:37.765 [2024-06-10 11:49:02.563820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.765 [2024-06-10 11:49:02.563834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.765 qpair failed and we were unable to recover it. 00:40:37.765 [2024-06-10 11:49:02.564126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.765 [2024-06-10 11:49:02.564140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.765 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.564411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.564425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.564632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.564646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.564973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.565014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.565349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.565389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.565761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.565803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.566093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.566133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.566463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.566504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.566830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.566845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 
00:40:37.766 [2024-06-10 11:49:02.567163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.567203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.567553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.567608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.567850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.567863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.568157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.568191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.568426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.568466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.568761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.568778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.569086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.569100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.569436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.569450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.569689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.569703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.569939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.569953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 
00:40:37.766 [2024-06-10 11:49:02.570291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.570305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.570567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.570586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.570790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.570804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.571094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.571109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.571351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.571365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.571633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.571648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.571820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.571834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.572050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.572064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.572304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.572318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.572566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.572586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 
00:40:37.766 [2024-06-10 11:49:02.572805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.572819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.573045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.573058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.573377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.573391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.573701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.573716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.574043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.574057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.574336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.574350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.574595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.574609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.574929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.574944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.575149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.575162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.766 qpair failed and we were unable to recover it. 00:40:37.766 [2024-06-10 11:49:02.575454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.766 [2024-06-10 11:49:02.575469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 
00:40:37.767 [2024-06-10 11:49:02.575703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.575719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.576055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.576070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.576413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.576428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.576763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.576777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.577119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.577133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.577391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.577405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.577643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.577657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.577967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.577981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.578172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.578186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.578434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.578448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 
00:40:37.767 [2024-06-10 11:49:02.578682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.578696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.579007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.579021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.579335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.579349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.579519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.579533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.579771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.579785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.580121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.580137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.580472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.580486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.580650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.580663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.580969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.580983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.581242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.581256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 
00:40:37.767 [2024-06-10 11:49:02.581507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.581521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.581814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.581829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.582168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.582182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.582518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.582532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.582823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.582838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.583070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.583084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.583386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.583400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.583707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.583721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.584032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.584046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.584290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.584304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 
00:40:37.767 [2024-06-10 11:49:02.584611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.584625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.584939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.584952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.585216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.585230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.585568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.585588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.585832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.585846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.586084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.586097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.586266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.586280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.767 [2024-06-10 11:49:02.586590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.767 [2024-06-10 11:49:02.586604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.767 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.586869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.586883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.587057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.587070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 
00:40:37.768 [2024-06-10 11:49:02.587419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.587433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.587709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.587723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.588018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.588032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.588319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.588333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.588648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.588663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.588975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.588990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.589220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.589235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.589405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.589418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.589746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.589760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.589954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.589967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 
00:40:37.768 [2024-06-10 11:49:02.590202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.590216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.590531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.590547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.590796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.590810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.590995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.591009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.591184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.591198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.591359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.591375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.591677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.591692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.591874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.591888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.592134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.592147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.592461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.592475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 
00:40:37.768 [2024-06-10 11:49:02.592665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.592679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.592898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.592912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.593070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.593083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.593374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.593387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.593632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.593649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.593937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.593952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.594273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.594287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.594570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.594597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.594898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.594912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.595147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.595162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 
00:40:37.768 [2024-06-10 11:49:02.595430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.595444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.595682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.595697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.595986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.596001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.596307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.596321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.596614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.596630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.768 [2024-06-10 11:49:02.596861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.768 [2024-06-10 11:49:02.596875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.768 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.597112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.597126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.597393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.597408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.597640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.597655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.597948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.597962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 
00:40:37.769 [2024-06-10 11:49:02.598206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.598220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.598540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.598554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.598803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.598817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.599133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.599147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.599408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.599422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.599720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.599738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.599979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.599996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.600219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.600233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.600520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.600535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.600786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.600801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 
00:40:37.769 [2024-06-10 11:49:02.601063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.601077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.601387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.601400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.601599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.601612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.601837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.601853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.602073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.602087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.602392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.602409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.602698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.602712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.602936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.602950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.603123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.603137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.603391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.603405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 
00:40:37.769 [2024-06-10 11:49:02.603641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.603655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.603943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.603957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.604292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.604306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.604522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.604536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.604772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.604786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.604969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.604983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.605135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.605149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.605452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.605466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.605780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.605794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.606040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.606055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 
00:40:37.769 [2024-06-10 11:49:02.606372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.606385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.606610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.606624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.606955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.606969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.769 [2024-06-10 11:49:02.607202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.769 [2024-06-10 11:49:02.607217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.769 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.607520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.607534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.607906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.607920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.608162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.608176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.608473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.608487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.608728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.608742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.608909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.608940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 
00:40:37.770 [2024-06-10 11:49:02.609223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.609237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.609476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.609490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.609710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.609724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.609993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.610006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.610295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.610308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.610617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.610631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.610957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.610971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.611200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.611214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.611528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.611542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 00:40:37.770 [2024-06-10 11:49:02.611831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.770 [2024-06-10 11:49:02.611845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.770 qpair failed and we were unable to recover it. 
00:40:37.776 [2024-06-10 11:49:02.671970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.672011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.672296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.672310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.672574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.672606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.672774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.672796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.673099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.673113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.673366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.673383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.673671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.673684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.673904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.673918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.674136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.674149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.674313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.674327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 
00:40:37.776 [2024-06-10 11:49:02.674491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.674505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.674749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.674762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.675017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.675030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.675200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.675214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.675367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.675380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.675665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.675680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.675853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.675893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.676192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.676231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.676569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.676653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.677025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.677065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 
00:40:37.776 [2024-06-10 11:49:02.677273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.677314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.677612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.677626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.677916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.677956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.678246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.678286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.678591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.678631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.678842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.678882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.679167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.679207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.679529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.679596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.679960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.680000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.680211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.680269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 
00:40:37.776 [2024-06-10 11:49:02.680558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.680611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.680956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.680996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.681357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.681398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.681695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.681736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.681974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.682014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.776 qpair failed and we were unable to recover it. 00:40:37.776 [2024-06-10 11:49:02.682331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.776 [2024-06-10 11:49:02.682371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.682670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.682710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.682937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.682977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.683263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.683275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.683492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.683506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 
00:40:37.777 [2024-06-10 11:49:02.683675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.683689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.683898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.683911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.684222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.684262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.684551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.684634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.684913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.684953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.685260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.685307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.685680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.685721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.685997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.686037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.686360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.686400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.686683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.686724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 
00:40:37.777 [2024-06-10 11:49:02.687092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.687132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.687407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.687448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.687818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.687858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.688205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.688245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.688598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.688639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.688882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.688922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.689200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.689240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.689517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.689557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.689850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.689890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.690101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.690114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 
00:40:37.777 [2024-06-10 11:49:02.690329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.690340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.690640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.690681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.690960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.691001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.691239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.691278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.691646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.691686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.691965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.692013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.692244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.692256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.692462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.692474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.692693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.692733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.692957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.692998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 
00:40:37.777 [2024-06-10 11:49:02.693344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.693381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.693620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.693633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.693937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.693950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.777 qpair failed and we were unable to recover it. 00:40:37.777 [2024-06-10 11:49:02.694209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.777 [2024-06-10 11:49:02.694222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.694379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.694391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.694604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.694644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.694992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.695032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.695350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.695391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.695687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.695727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.696001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.696015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 
00:40:37.778 [2024-06-10 11:49:02.696251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.696291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.696668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.696708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.696988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.697036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.697195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.697206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.697420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.697432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.697622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.697636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.697915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.697953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.698229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.698269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.698479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.698491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.698802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.698842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 
00:40:37.778 [2024-06-10 11:49:02.699211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.699250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.699517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.699530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.699681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.699694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.700019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.700031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.700255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.700267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.700586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.700626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.700855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.700894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.701174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.701187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.701347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.701359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.701572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.701596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 
00:40:37.778 [2024-06-10 11:49:02.701893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.701932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.702295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.702335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.702570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.702588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.702763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.702776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.703107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.703148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.703509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.703549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.703906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.703947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.704182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.704222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.704597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.704638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.705009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.705049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 
00:40:37.778 [2024-06-10 11:49:02.705330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.705371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.778 [2024-06-10 11:49:02.705623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.778 [2024-06-10 11:49:02.705665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.778 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.705957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.705998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.706366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.706406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.706768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.706809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.707110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.707150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.707443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.707484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.707781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.707822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.708191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.708231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.708598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.708639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 
00:40:37.779 [2024-06-10 11:49:02.708885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.708925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.709139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.709152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.709323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.709363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.709649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.709690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.709979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.710019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.710329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.710368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.710707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.710767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.711188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.711233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.711612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.711652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.711938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.711978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 
00:40:37.779 [2024-06-10 11:49:02.712265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.712278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.712527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.712563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.712923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.712963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.713252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.713292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.713597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.713637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.713982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.714022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.714370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.714409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.714752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.714794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.715031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.715071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.715367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.715408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 
00:40:37.779 [2024-06-10 11:49:02.715757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.715798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.716113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.716153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.716390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.716430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.716655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.716696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.717040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.779 [2024-06-10 11:49:02.717080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.779 qpair failed and we were unable to recover it. 00:40:37.779 [2024-06-10 11:49:02.717374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.717414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.717705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.717745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.718046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.718086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.718285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.718298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.718603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.718617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 
00:40:37.780 [2024-06-10 11:49:02.718781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.718794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.719099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.719112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.719256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.719271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.719544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.719557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.719782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.719795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.719950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.719963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.720217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.720257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.720552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.720601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.720901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.720938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.721155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.721168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 
00:40:37.780 [2024-06-10 11:49:02.721480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.721519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.721804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.721846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.722227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.722267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.722632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.722674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.722980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.723020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.723316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.723356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.723660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.723700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.724011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.724051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.724395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.724408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.724698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.724738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 
00:40:37.780 [2024-06-10 11:49:02.725083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.725119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.725340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.725363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.725643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.725657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.725875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.725915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.726906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.726930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.727186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.727200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.727437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.727450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.727663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.727677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.727893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.727906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.728125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.728138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 
00:40:37.780 [2024-06-10 11:49:02.728370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.728410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.728738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.728779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.728989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.780 [2024-06-10 11:49:02.729031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.780 qpair failed and we were unable to recover it. 00:40:37.780 [2024-06-10 11:49:02.729308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.729348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.729663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.729705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.730000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.730040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.730383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.730423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.730733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.730775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.731157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.731197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.731486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.731526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 
00:40:37.781 [2024-06-10 11:49:02.731953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.731994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.732289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.732330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.732641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.732657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.732955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.732994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.733368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.733408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.733643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.733657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.733992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.734005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.734167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.734180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.734413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.734426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.734583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.734596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 
00:40:37.781 [2024-06-10 11:49:02.734913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.734954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.735327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.735368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.735638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.735672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.735907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.735947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.736227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.736240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.736444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.736485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.736804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.736846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.737074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.737114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.737401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.737451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.737697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.737710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 
00:40:37.781 [2024-06-10 11:49:02.737928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.737941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.738175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.738188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.738416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.738456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.738679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.738719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.738947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.738987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.739352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.739393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.739783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.739823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.740100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.740141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.740420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.740461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.740788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.740830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 
00:40:37.781 [2024-06-10 11:49:02.741128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.741168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.781 qpair failed and we were unable to recover it. 00:40:37.781 [2024-06-10 11:49:02.741483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.781 [2024-06-10 11:49:02.741524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.741901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.741942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.742294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.742334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.742702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.742744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.743090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.743131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.743428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.743469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.743697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.743738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.743978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.744018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.744315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.744355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 
00:40:37.782 [2024-06-10 11:49:02.744660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.744673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.744863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.744876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.745170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.745216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.745439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.745479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.745778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.745820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.746108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.746148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.746515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.746556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.746813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.746854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.747161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.747200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.747497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.747537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 
00:40:37.782 [2024-06-10 11:49:02.747877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.747918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.748217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.748258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.748622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.748664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.748955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.748995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.749208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.749250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.749633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.749646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.749890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.749930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.750250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.750291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.750598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.750611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.750852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.750865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 
00:40:37.782 [2024-06-10 11:49:02.751040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.751053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.751286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.751299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.751533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.751573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.751862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.751903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.752176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.752189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.752376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.752389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.752552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.752565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.752792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.752834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.753144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.753185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.782 qpair failed and we were unable to recover it. 00:40:37.782 [2024-06-10 11:49:02.753490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.782 [2024-06-10 11:49:02.753525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 
00:40:37.783 [2024-06-10 11:49:02.753830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.753843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.754063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.754076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.754293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.754306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.754451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.754464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.754746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.754760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.755000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.755013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.755234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.755274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.755557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.755605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.755880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.755921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.756217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.756268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 
00:40:37.783 [2024-06-10 11:49:02.756435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.756448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.756675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.756689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.756925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.756940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.757166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.757180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.757352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.757394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.757785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.757827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.758192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.758232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.758591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.758632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.759000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.759041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.759341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.759381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 
00:40:37.783 [2024-06-10 11:49:02.759617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.759630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.759930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.759943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.760177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.760190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.760405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.760418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.760581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.760594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.760912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.760952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.761246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.761287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.761651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.761664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.761834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.761847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.762007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.762021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 
00:40:37.783 [2024-06-10 11:49:02.762255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.762295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.762533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.762573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.762896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.762938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.783 qpair failed and we were unable to recover it. 00:40:37.783 [2024-06-10 11:49:02.763214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.783 [2024-06-10 11:49:02.763253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.784 qpair failed and we were unable to recover it. 00:40:37.784 [2024-06-10 11:49:02.763588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.784 [2024-06-10 11:49:02.763629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.784 qpair failed and we were unable to recover it. 00:40:37.784 [2024-06-10 11:49:02.763925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.784 [2024-06-10 11:49:02.763966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.784 qpair failed and we were unable to recover it. 00:40:37.784 [2024-06-10 11:49:02.764263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.784 [2024-06-10 11:49:02.764304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.784 qpair failed and we were unable to recover it. 00:40:37.784 [2024-06-10 11:49:02.764595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.784 [2024-06-10 11:49:02.764637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.784 qpair failed and we were unable to recover it. 00:40:37.784 [2024-06-10 11:49:02.765009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.784 [2024-06-10 11:49:02.765049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.784 qpair failed and we were unable to recover it. 00:40:37.784 [2024-06-10 11:49:02.765347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.784 [2024-06-10 11:49:02.765388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.784 qpair failed and we were unable to recover it. 
00:40:37.784 [2024-06-10 11:49:02.765602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:37.784 [2024-06-10 11:49:02.765644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 
00:40:37.784 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for every connection attempt timestamped between 11:49:02.765 and 11:49:02.830 ...]
00:40:37.812 [2024-06-10 11:49:02.830509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:40:37.812 [2024-06-10 11:49:02.830522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 
00:40:37.812 qpair failed and we were unable to recover it. 
00:40:37.812 [2024-06-10 11:49:02.830820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.812 [2024-06-10 11:49:02.830834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.812 qpair failed and we were unable to recover it. 00:40:37.812 [2024-06-10 11:49:02.831090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.812 [2024-06-10 11:49:02.831104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.812 qpair failed and we were unable to recover it. 00:40:37.812 [2024-06-10 11:49:02.831407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.812 [2024-06-10 11:49:02.831422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.812 qpair failed and we were unable to recover it. 00:40:37.812 [2024-06-10 11:49:02.831698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.812 [2024-06-10 11:49:02.831712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.812 qpair failed and we were unable to recover it. 00:40:37.812 [2024-06-10 11:49:02.832039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.812 [2024-06-10 11:49:02.832052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.812 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.832287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.832300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.832592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.832606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.832920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.832933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.833149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.833161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.833425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.833439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 
00:40:37.813 [2024-06-10 11:49:02.833669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.833683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.833792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.833804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.834033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.834046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.834377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.834390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.834718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.834731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.834901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.834914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.835152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.835165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.835497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.835510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.835726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.835739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.835956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.835969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 
00:40:37.813 [2024-06-10 11:49:02.836264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.836277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.836526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.836539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.836821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.836834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.836997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.837010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.837258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.837271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.837504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.837517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.837753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.837766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.837994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.838007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.838294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.838308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.838556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.838569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 
00:40:37.813 [2024-06-10 11:49:02.838846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.838859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.839094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.839107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.839342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.839355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.839658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.839671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.839924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.839937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.840122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.840135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.840368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.813 [2024-06-10 11:49:02.840381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.813 qpair failed and we were unable to recover it. 00:40:37.813 [2024-06-10 11:49:02.840690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.840704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.841007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.841020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.841310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.841323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 
00:40:37.814 [2024-06-10 11:49:02.841625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.841638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.841792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.841805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.842115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.842130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.842360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.842373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.842676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.842690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.842915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.842928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.843172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.843185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.843475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.843488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.843715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.843728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.843902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.843915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 
00:40:37.814 [2024-06-10 11:49:02.844100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.844113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.844346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.844360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.844579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.844593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.844924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.844938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.845166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.845179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.845474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.845488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.845703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.845717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.845899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.845912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.846144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.846158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.846368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.846381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 
00:40:37.814 [2024-06-10 11:49:02.846680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.846693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.846996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.847009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.847248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.847261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.847491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.814 [2024-06-10 11:49:02.847504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.814 qpair failed and we were unable to recover it. 00:40:37.814 [2024-06-10 11:49:02.847750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.847763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.847989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.848002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.848299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.848313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.848478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.848491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.848672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.848685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.848865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.848879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 
00:40:37.815 [2024-06-10 11:49:02.849030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.849043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.849337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.849350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.849572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.849590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.849804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.849818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.850057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.850070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.850248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.850260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.850486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.850499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.850727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.850741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.850954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.850968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.851199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.851212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 
00:40:37.815 [2024-06-10 11:49:02.851441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.851454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.851602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.851622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.851903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.851919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.852076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.852089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.852240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.852252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.852551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.852564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.852793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.852806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.853024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.853037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.853262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.853275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.853556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.853569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 
00:40:37.815 [2024-06-10 11:49:02.853791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.853805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.854040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.854053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.854230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.854243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.854399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.854413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.854634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.854647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.854871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.854884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.855101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.855114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.855364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.855377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.855673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.855688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 00:40:37.815 [2024-06-10 11:49:02.855857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.815 [2024-06-10 11:49:02.855873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.815 qpair failed and we were unable to recover it. 
00:40:37.815 [2024-06-10 11:49:02.856217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.816 [2024-06-10 11:49:02.856230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.816 qpair failed and we were unable to recover it. 00:40:37.816 [2024-06-10 11:49:02.856470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.816 [2024-06-10 11:49:02.856489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.816 qpair failed and we were unable to recover it. 00:40:37.816 [2024-06-10 11:49:02.856735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.816 [2024-06-10 11:49:02.856752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.816 qpair failed and we were unable to recover it. 00:40:37.816 [2024-06-10 11:49:02.856925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.816 [2024-06-10 11:49:02.856944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.816 qpair failed and we were unable to recover it. 00:40:37.816 [2024-06-10 11:49:02.857267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:37.816 [2024-06-10 11:49:02.857283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:37.816 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.857500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.857514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.857737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.857751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.857900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.857915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.858199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.858212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.858379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.858393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 
00:40:38.152 [2024-06-10 11:49:02.858678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.858692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.858923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.858936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.859165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.859178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.859410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.859423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.859589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.859602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.859834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.859847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.860060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.860073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.860247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.860260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.860541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.860554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.860853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.860866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 
00:40:38.152 [2024-06-10 11:49:02.861109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.861122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.861414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.861428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.861613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.861627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.861946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.861959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.862262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.862275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.862440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.862453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.862600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.862612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.862957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.862971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.863189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.863203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.863488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.863502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 
00:40:38.152 [2024-06-10 11:49:02.863675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.863690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.863992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.864011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.864237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.864251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.864553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.864567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.864756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.864773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.865076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.865089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.865323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.865336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.865567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.865584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.865799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.865812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.866049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.866062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 
00:40:38.152 [2024-06-10 11:49:02.866295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.866309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.866545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.866558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.866713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.866726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.867018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.867031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.152 [2024-06-10 11:49:02.867269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.152 [2024-06-10 11:49:02.867282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.152 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.867616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.867634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.867864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.867882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.868169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.868183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.868344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.868357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.868596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.868613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 
00:40:38.153 [2024-06-10 11:49:02.868861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.868875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.869103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.869116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.869446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.869460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.869706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.869720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.869877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.869891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.870058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.870071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.870297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.870310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.870548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.870561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.870784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.870797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.870961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.870974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 
00:40:38.153 [2024-06-10 11:49:02.871140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.871154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.871392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.871405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.871709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.871723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.872001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.872015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.872343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.872357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.872524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.872537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.872771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.872784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.873001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.873014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.873325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.873339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.873550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.873563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 
00:40:38.153 [2024-06-10 11:49:02.873817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.873831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.874062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.874075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.874374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.874387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.874622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.874635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.874918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.874931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.875214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.875229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.875541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.875555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.875862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.875876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.876157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.876170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.876473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.876486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 
00:40:38.153 [2024-06-10 11:49:02.876699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.876713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.876878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.876891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.877202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.877216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.877412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.877425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.877731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.877745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.877998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.878011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.878242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.878255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.878436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.878451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.878695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.153 [2024-06-10 11:49:02.878709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.153 qpair failed and we were unable to recover it. 00:40:38.153 [2024-06-10 11:49:02.878994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.879010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 
00:40:38.154 [2024-06-10 11:49:02.879233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.879245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.879472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.879484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.879737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.879750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.880042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.880056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.880272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.880285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.880516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.880529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.880707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.880720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.880955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.880969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.881261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.881275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.881431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.881444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 
00:40:38.154 [2024-06-10 11:49:02.881615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.881628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.881862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.881875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.882178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.882190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.882405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.882418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.882652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.882666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.882966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.882979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.883234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.883246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.883529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.883542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.883828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.883841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.884120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.884133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 
00:40:38.154 [2024-06-10 11:49:02.884414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.884427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.884657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.884670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.884988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.885002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.885167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.885181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.885428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.885441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.885591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.885605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.885769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.885782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.885995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.886009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.886290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.886302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.886599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.886612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 
00:40:38.154 [2024-06-10 11:49:02.886941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.886954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.887210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.887223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.887461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.887475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.887666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.887679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.887984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.887997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.888278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.888291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.888593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.888606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.888832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.888845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.889013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.889025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.889308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.889324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 
00:40:38.154 [2024-06-10 11:49:02.889549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.889562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.889778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.889791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.890027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.890040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.154 [2024-06-10 11:49:02.890322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.154 [2024-06-10 11:49:02.890335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.154 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.890566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.890582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.890814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.890828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.891060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.891073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.891378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.891390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.891689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.891702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.891851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.891864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 
00:40:38.155 [2024-06-10 11:49:02.892038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.892051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.892395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.892408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.892570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.892587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.892821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.892834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.892999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.893012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.893317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.893330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.893544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.893557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.893815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.893829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.893917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.893929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.894163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.894177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 
00:40:38.155 [2024-06-10 11:49:02.894457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.894470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.894694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.894707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.894989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.895002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.895306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.895319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.895496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.895508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.895738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.895751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.896042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.896055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.896266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.896280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.896527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.896540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.896718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.896731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 
00:40:38.155 [2024-06-10 11:49:02.896958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.896972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.897277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.897290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.897382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.897394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.897676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.897690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.897904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.897917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.898143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.898156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.898396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.898409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.898712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.898725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.899008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.899021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.899302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.899317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 
00:40:38.155 [2024-06-10 11:49:02.899548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.899561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.899800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.899813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.900097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.900110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.155 [2024-06-10 11:49:02.900347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.155 [2024-06-10 11:49:02.900360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.155 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.900595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.900608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.900837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.900850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.901040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.901052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.901290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.901303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.901517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.901530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.901811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.901825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 
00:40:38.156 [2024-06-10 11:49:02.902123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.902136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.902417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.902430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.902734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.902747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.902910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.902923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.903164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.903177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.903469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.903481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.903715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.903728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.903914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.903928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.904227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.904240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.904474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.904488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 
00:40:38.156 [2024-06-10 11:49:02.904794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.904808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.905040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.905053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.905308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.905321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.905484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.905497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.905782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.905795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.906083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.906096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.906333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.906346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.906629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.906643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.906946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.906960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.907195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.907208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 
00:40:38.156 [2024-06-10 11:49:02.907365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.907378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.907562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.907580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.907862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.907875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.908155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.908168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.908391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.908404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.908571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.908598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.908829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.908842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.909015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.909028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.909239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.909252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.909492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.909507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 
00:40:38.156 [2024-06-10 11:49:02.909743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.909757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.909918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.909931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.910119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.910132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.910304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.910317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.910622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.910635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.910881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.156 [2024-06-10 11:49:02.910920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.156 qpair failed and we were unable to recover it. 00:40:38.156 [2024-06-10 11:49:02.911281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.911321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.911705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.911718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.911939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.911979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.912325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.912364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 
00:40:38.157 [2024-06-10 11:49:02.912654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.912693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.913035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.913075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.913384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.913424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.913795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.913836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.914091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.914104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.914332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.914345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.914583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.914597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.914827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.914840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.915130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.915169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.915390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.915430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 
00:40:38.157 [2024-06-10 11:49:02.915648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.915662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.915815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.915854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.916160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.916200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.916528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.916568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.916875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.916915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.917134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.917173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.917458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.917498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.917877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.917917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.918264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.918304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.918543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.918602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 
00:40:38.157 [2024-06-10 11:49:02.918970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.919009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.919322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.919362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.919600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.919613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.919918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.919931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.920183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.920196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.920357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.920370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.920617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.920658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.921062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.921106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.921452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.921507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.921748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.921796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 
00:40:38.157 [2024-06-10 11:49:02.922091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.922103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.922355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.922368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.922641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.922654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.922962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.922976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.923257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.923270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.923501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.923514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.923794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.923807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.924037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.924050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.924350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.924363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.924668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.924682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 
00:40:38.157 [2024-06-10 11:49:02.924866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.924879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.925161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.925174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.925417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.925430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.157 qpair failed and we were unable to recover it. 00:40:38.157 [2024-06-10 11:49:02.925596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.157 [2024-06-10 11:49:02.925609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.925891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.925906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.926145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.926160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.926463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.926476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.926660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.926674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.926979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.926992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.927202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.927215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 
00:40:38.158 [2024-06-10 11:49:02.927470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.927484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.927789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.927803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.928014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.928027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.928347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.928360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.928586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.928604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.928783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.928797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.929115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.929129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.929415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.929429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.929682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.929696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.929913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.929926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 
00:40:38.158 [2024-06-10 11:49:02.930206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.930224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.930462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.930478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.930810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.930824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.931133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.931147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.931394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.931411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.931642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.931658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.931816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.931829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.931976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.931993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.932349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.932363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.932581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.932597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 
00:40:38.158 [2024-06-10 11:49:02.932824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.932838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.933064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.933078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.933408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.933421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.933655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.933668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.933848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.933861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.934031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.934044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.934298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.934312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.934542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.934555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.934841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.934855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.935101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.935115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 
00:40:38.158 [2024-06-10 11:49:02.935394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.935433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.935789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.935802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.936086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.936100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.936263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.936276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.936498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.936537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.936918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.936999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.937315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.937358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.937709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.937753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.938100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.938141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.938455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.938494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 
00:40:38.158 [2024-06-10 11:49:02.938776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.158 [2024-06-10 11:49:02.938797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.158 qpair failed and we were unable to recover it. 00:40:38.158 [2024-06-10 11:49:02.939044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.939064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.939241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.939262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.939464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.939478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.939704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.939744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.939969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.940008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.940377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.940454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.940762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.940783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.941132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.941152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.941477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.941516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 
00:40:38.159 [2024-06-10 11:49:02.941845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.941886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.942230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.942269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.942622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.942663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.943017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.943057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.943425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.943464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.943740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.943760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.944073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.944113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.944478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.944518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.944890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.944938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.945298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.945347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 
00:40:38.159 [2024-06-10 11:49:02.945703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.945744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.946020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.946040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.946269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.946289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.946612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.946652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.946951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.946992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.947200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.947240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.947607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.947647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.947920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.947960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.948306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.948346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.948638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.948678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 
00:40:38.159 [2024-06-10 11:49:02.949039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.949079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.949467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.949506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.949856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.949898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.950263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.950302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.950648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.950689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.951061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.951101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.951391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.951430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.951781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.951829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.952153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.952193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.952502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.952541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 
00:40:38.159 [2024-06-10 11:49:02.952770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.952810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.953111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.953150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.953516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.159 [2024-06-10 11:49:02.953555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.159 qpair failed and we were unable to recover it. 00:40:38.159 [2024-06-10 11:49:02.953811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.953852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.954196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.954236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.954512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.954552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.954863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.954883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.955065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.955086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.955290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.955330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.955720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.955778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 
00:40:38.160 [2024-06-10 11:49:02.956086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.956125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.956476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.956496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.956820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.956840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.957144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.957183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.957405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.957446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.957792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.957833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.958113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.958153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.958511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.958551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.958854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.958899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.959211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.959256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 
00:40:38.160 [2024-06-10 11:49:02.959545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.959597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.959891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.959931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.960204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.960244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.960523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.960562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.960740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.960760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.961074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.961094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.961424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.961474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.961708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.961749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.962021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.962061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.962335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.962375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 
00:40:38.160 [2024-06-10 11:49:02.962676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.962716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.963086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.963125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.963469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.963509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.963801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.963821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.964109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.964129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.964310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.964330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.964582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.964603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.964867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.964887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.965157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.965198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.965487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.965527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 
00:40:38.160 [2024-06-10 11:49:02.965800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.965821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.966005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.966025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.966260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.966281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.966549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.966598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.966913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.966953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.967295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.967334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.967638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.967679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.968025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.968065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.968363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.968402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.160 [2024-06-10 11:49:02.968753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.968793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 
00:40:38.160 [2024-06-10 11:49:02.969138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.160 [2024-06-10 11:49:02.969178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.160 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.969418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.969458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.969671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.969691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.969989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.970009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.970281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.970301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.970569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.970594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.970836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.970856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.971158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.971198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.971555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.971607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.971963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.972009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 
00:40:38.161 [2024-06-10 11:49:02.972304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.972344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.972633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.972673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.972843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.972883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.973225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.973264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.973563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.973624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.973967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.974007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.974314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.974353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.974723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.974763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.974998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.975038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.975247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.975286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 
00:40:38.161 [2024-06-10 11:49:02.975528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.975568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.975912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.975932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.976129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.976149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.976449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.976469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.976766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.976787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.977091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.977131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.977411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.977450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.977797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.977845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.978132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.978172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.978387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.978427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 
00:40:38.161 [2024-06-10 11:49:02.978724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.978764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.979126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.979166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.979506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.979546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.979857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.979897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.980261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.980301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.980522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.980562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.980924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.980966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.981331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.981371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.981688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.981729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.982010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.982049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 
00:40:38.161 [2024-06-10 11:49:02.982396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.982435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.982779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.982827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.983154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.983194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.983539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.983586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.983741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.983782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.984126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.984165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.984535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.984574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.984965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.985005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.985287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.985327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.985689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.985713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 
00:40:38.161 [2024-06-10 11:49:02.986018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.161 [2024-06-10 11:49:02.986058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.161 qpair failed and we were unable to recover it. 00:40:38.161 [2024-06-10 11:49:02.986335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.986375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.986718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.986758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.987126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.987166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.987481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.987521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.987776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.987818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.988181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.988201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.988506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.988546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.988777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.988818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.989132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.989171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 
00:40:38.162 [2024-06-10 11:49:02.989461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.989481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.989739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.989759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.990105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.990145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.990421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.990461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.990750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.990790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.991058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.991078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.991327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.991347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.991620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.991641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.991909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.991929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.992251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.992271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 
00:40:38.162 [2024-06-10 11:49:02.992502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.992542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.992762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.992802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.993168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.993209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.993563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.993632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.993922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.993963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.994255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.994295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.994664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.994705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.995008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.995048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.995363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.995402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.995751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.995771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 
00:40:38.162 [2024-06-10 11:49:02.996072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.996110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.996421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.996460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.996798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.996839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.997183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.997222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.997496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.997536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.997830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.997871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.998157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.998197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.998480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.998520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.998875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.998916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.999206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.999251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 
00:40:38.162 [2024-06-10 11:49:02.999602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.999642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:02.999927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:02.999966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.000188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.000228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.000598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.000638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.000915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.000955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.001300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.001340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.001704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.001745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.002030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.002050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.002376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.002415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.002712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.002752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 
00:40:38.162 [2024-06-10 11:49:03.003108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.003147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.003443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.162 [2024-06-10 11:49:03.003482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.162 qpair failed and we were unable to recover it. 00:40:38.162 [2024-06-10 11:49:03.003826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.003867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.004197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.004237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.004461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.004500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.004885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.004926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.005224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.005263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.005559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.005607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.005938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.005977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.006291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.006331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 
00:40:38.163 [2024-06-10 11:49:03.006703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.006743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.007035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.007074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.007402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.007442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.007735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.007776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.007992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.008031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.008393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.008434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.008728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.008768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.009120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.009160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.009456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.009496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.009899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.009939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 
00:40:38.163 [2024-06-10 11:49:03.010175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.010216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.010509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.010548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.010850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.010891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.011168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.011208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.011433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.011473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.011759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.011800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.012130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.012170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.012491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.012531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.012907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.012927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.013157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.013177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 
00:40:38.163 [2024-06-10 11:49:03.013349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.013370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.013621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.013661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.014041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.014080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.014425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.014464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.014755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.014795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.015163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.015202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.015513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.015552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.015860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.015901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.016244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.016284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 00:40:38.163 [2024-06-10 11:49:03.016513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.016552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it. 
00:40:38.163 [2024-06-10 11:49:03.016907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.163 [2024-06-10 11:49:03.016947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.163 qpair failed and we were unable to recover it.
00:40:38.163-00:40:38.167 [the same three-message error pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-06-10 11:49:03.016907 through 11:49:03.084040]
00:40:38.167 [2024-06-10 11:49:03.084276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.084296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.084529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.084551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.084799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.084819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.085067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.085087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.085333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.085353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.085605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.085625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.085874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.085898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.086229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.086251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.086493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.086514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.086760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.086781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 
00:40:38.167 [2024-06-10 11:49:03.087076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.087096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.087347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.167 [2024-06-10 11:49:03.087367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.167 qpair failed and we were unable to recover it. 00:40:38.167 [2024-06-10 11:49:03.087629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.087649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.087920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.087939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.088124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.088144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.088453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.088474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.088830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.088850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.089016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.089036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.089206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.089226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.089521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.089541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 
00:40:38.168 [2024-06-10 11:49:03.089784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.089804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.090051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.090071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.090366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.090386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.090710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.090731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.090930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.090950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.091198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.091218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.091538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.091558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.091742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.091763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.092012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.092032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.092198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.092218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 
00:40:38.168 [2024-06-10 11:49:03.092474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.092494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.092738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.092759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.092935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.092957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.093206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.093227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.093453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.093473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.093718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.093738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.094038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.094058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.094235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.094261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.094560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.094599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.094877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.094898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 
00:40:38.168 [2024-06-10 11:49:03.095169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.095189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.095402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.095424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.095653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.095674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.095856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.095876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.096132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.096153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.096390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.096410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.096676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.096702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.097004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.097024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.097373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.097394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.097699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.097720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 
00:40:38.168 [2024-06-10 11:49:03.098045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.098065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.098265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.098285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.098594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.098614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.098807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.098827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.099076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.099097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.099268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.099288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.099470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.099490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.099673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.099693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.099929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.099950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.100241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.100261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 
00:40:38.168 [2024-06-10 11:49:03.100517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.100537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.100806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.100827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.101153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.101173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.101422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.101442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.101712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.101732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.101963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.101983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.102151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.102171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.102370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.102391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.102595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.168 [2024-06-10 11:49:03.102615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.168 qpair failed and we were unable to recover it. 00:40:38.168 [2024-06-10 11:49:03.102813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.102834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 
00:40:38.169 [2024-06-10 11:49:03.103064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.103084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.103273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.103294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.103565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.103591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.103914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.103935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.104100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.104120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.104423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.104443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.104694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.104715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.104894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.104914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.105238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.105258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.105454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.105474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 
00:40:38.169 [2024-06-10 11:49:03.105669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.105693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.105926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.105946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.106192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.106212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.106392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.106412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.106694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.106714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.106971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.106991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.107218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.107241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.107403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.107423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.107619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.107639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.107802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.107823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 
00:40:38.169 [2024-06-10 11:49:03.108139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.108159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.108328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.108348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.108646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.108666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.108988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.109008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.109326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.109346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.109593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.109614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.109851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.109871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.110094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.110114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.110414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.110434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.110754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.110775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 
00:40:38.169 [2024-06-10 11:49:03.111100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.111120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.111362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.111382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.111702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.111723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.111973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.111995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.112323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.112343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.112612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.112633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.112804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.112825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.113015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.113035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.113260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.113280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.113511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.113532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 
00:40:38.169 [2024-06-10 11:49:03.113830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.113851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.114156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.114176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.114412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.114432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.114623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.114644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.114830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.114850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.115027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.115047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.115292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.115314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.115486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.115507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.115754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.115778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.169 [2024-06-10 11:49:03.116106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.116127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 
00:40:38.169 [2024-06-10 11:49:03.116389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.169 [2024-06-10 11:49:03.116409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.169 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.116668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.116688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.116935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.116956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.117228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.117249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.117496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.117515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.117757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.117778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.118040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.118064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.118315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.118336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.118569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.118595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.118789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.118809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 
00:40:38.170 [2024-06-10 11:49:03.118984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.119005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.119180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.119200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.119533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.119553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.119895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.119915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.120158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.120178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.120360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.120380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.120658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.120680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.120881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.120901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.121137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.121157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.121403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.121423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 
00:40:38.170 [2024-06-10 11:49:03.121664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.121685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.122006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.122026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.122239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.122259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.122450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.122470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.122722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.122742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.122906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.122926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.123106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.123126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.123370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.123391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.123566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.123591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.123837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.123857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 
00:40:38.170 [2024-06-10 11:49:03.124176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.124196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.124443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.124463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.124785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.124806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.125105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.125126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.125293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.125313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.125550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.125570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.125869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.125888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.126131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.126156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.126318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.126340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.126623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.126646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 
00:40:38.170 [2024-06-10 11:49:03.126850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.126877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.127198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.127219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.127453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.127473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.127681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.127702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.127956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.127976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.128143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.128163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.128403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.128429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.128618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.128640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.128870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.128890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.129190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.129210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 
00:40:38.170 [2024-06-10 11:49:03.129455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.129476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.129720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.129740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.129975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.129995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.130251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.130272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.130451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.130470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.130720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.130740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.130977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.130996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.170 [2024-06-10 11:49:03.131229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.170 [2024-06-10 11:49:03.131250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.170 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.131494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.131513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.131763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.131784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 
00:40:38.171 [2024-06-10 11:49:03.132019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.132039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.132298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.132318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.132490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.132510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.132758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.132778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.133022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.133042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.133272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.133292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.133557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.133581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.133825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.133846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.134118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.134138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.134311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.134331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 
00:40:38.171 [2024-06-10 11:49:03.134558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.134592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.134859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.134879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.135051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.135071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.135427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.135448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.135680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.135701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.135878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.135898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.136163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.136183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.136354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.136374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.136633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.136653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.136849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.136869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 
00:40:38.171 [2024-06-10 11:49:03.137109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.137129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.137368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.137387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.137686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.137707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.137947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.137967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.138229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.138248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.138585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.138606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.138850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.138873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.139152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.139172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.139405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.139426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.139598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.139619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 
00:40:38.171 [2024-06-10 11:49:03.139892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.139913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.140127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.140147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.140470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.140490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.140603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.140622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.140864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.140884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.141115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.141134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.141357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.141378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.141558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.141584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.141820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.141842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.142083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.142104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 
00:40:38.171 [2024-06-10 11:49:03.142445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.142466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.142715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.142736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.142983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.143004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.143165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.143185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.143485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.143505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.143802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.143823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.144145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.144165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.144346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.144365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.144620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.144640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.144940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.144960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 
00:40:38.171 [2024-06-10 11:49:03.145203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.145223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.145451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.145471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.145729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.145749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.145935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.145956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.146217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.146236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.146397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.146417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.171 [2024-06-10 11:49:03.146604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.171 [2024-06-10 11:49:03.146625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.171 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.146875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.146895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.147068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.147088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.147263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.147283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 
00:40:38.172 [2024-06-10 11:49:03.147579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.147600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.147833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.147853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.148101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.148122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.148357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.148377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.148635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.148655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.148818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.148838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.149158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.149181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.149343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.149363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.149604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.149624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.149801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.149821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 
00:40:38.172 [2024-06-10 11:49:03.150070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.150090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.150375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.150395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.150562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.150600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.150854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.150874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.151106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.151126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.151287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.151307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.151482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.151502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.151801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.151821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.152049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.152069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.152397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.152417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 
00:40:38.172 [2024-06-10 11:49:03.152697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.152717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.152962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.152982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.153156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.153176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.153364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.153384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.153571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.153595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.153915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.153935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.154182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.154202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.154314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.154333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.154605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.154627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.154944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.154964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 
00:40:38.172 [2024-06-10 11:49:03.155203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.155223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.155393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.155413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.155587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.155607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.155911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.155931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.156180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.156200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.156392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.156413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.156669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.156692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.156933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.156955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.157120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.157140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.157334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.157354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 
00:40:38.172 [2024-06-10 11:49:03.157627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.157648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.157830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.157851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.158185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.158205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.158458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.158484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.158721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.158741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.159022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.159062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.159265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.159304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.159585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.159606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.159932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.159952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.160201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.160221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 
00:40:38.172 [2024-06-10 11:49:03.160458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.172 [2024-06-10 11:49:03.160478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.172 qpair failed and we were unable to recover it. 00:40:38.172 [2024-06-10 11:49:03.160649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.160670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.160994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.161033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.161317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.161337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.161654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.161675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.161979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.162019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.162257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.162296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.162587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.162607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.162791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.162811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.163130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.163150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 
00:40:38.173 [2024-06-10 11:49:03.163331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.163371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.163593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.163647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.164003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.164042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.164325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.164364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.164704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.164744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.165039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.165078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.165457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.165497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.165746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.165787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.166022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.166061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.166379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.166418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 
00:40:38.173 [2024-06-10 11:49:03.166786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.166826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.167158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.167197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.167562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.167611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.167858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.167903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.168197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.168237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.168454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.168493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.168723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.168764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.169050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.169090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.169351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.169371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.169609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.169630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 
00:40:38.173 [2024-06-10 11:49:03.169868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.169907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.170214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.170254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.170497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.170517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.170629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.170648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.170898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.170918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.171128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.171148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.171448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.171487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.171708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.171749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.172120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.172160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.172504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.172544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 
00:40:38.173 [2024-06-10 11:49:03.172850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.172890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.173196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.173236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.173515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.173554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.173896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.173936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.174214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.174253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.174632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.174673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.174967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.174988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.175221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.175241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.176393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.176428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.176778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.176800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 
00:40:38.173 [2024-06-10 11:49:03.177131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.177172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.177452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.177492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.177779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.177819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.178110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.178158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.178466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.178505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.178735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.178775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.179069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.179109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.180628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.180663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.181000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.181022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.181321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.181341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 
00:40:38.173 [2024-06-10 11:49:03.181527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.181569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.181891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.181931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.182224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.182264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.182628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.182677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.173 [2024-06-10 11:49:03.183091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.173 [2024-06-10 11:49:03.183132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.173 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.183475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.183515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.183756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.183797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.184089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.184128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.184394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.184414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.184641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.184662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 
00:40:38.174 [2024-06-10 11:49:03.184994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.185034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.185323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.185363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.185757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.185777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.186045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.186066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.186260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.186280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.186534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.186554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.186756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.186777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.187079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.187099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.187276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.187316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.187477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.187518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 
00:40:38.174 [2024-06-10 11:49:03.187845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.187886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.189316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.189351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.189587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.189609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.189940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.189980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.190347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.190387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.190625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.190648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.190833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.190853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.191169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.191209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.191455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.191495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.191867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.191907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 
00:40:38.174 [2024-06-10 11:49:03.192300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.192340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.192681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.192721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.193071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.193110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.193344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.193364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.193612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.193652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.194023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.194064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.194411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.194450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.194755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.194796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.195078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.195118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.195341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.195380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 
00:40:38.174 [2024-06-10 11:49:03.195676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.195697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.196021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.196041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.196233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.196253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.196502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.196525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.196776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.196796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.196936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.196956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.197186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.197206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.197387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.197407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.197727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.197747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.197997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.198017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 
00:40:38.174 [2024-06-10 11:49:03.198294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.198313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.198586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.198607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.198791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.198811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.199135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.199155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.199342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.199362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.199638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.199659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.199829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.199849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.200084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.200104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.200271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.200291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.200557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.200581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 
00:40:38.174 [2024-06-10 11:49:03.200880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.200900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.201238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.201258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.201523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.201544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.201818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.201839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.202103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.202123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.202442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.202462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.202644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.202665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.202859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.202879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.174 [2024-06-10 11:49:03.203194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.174 [2024-06-10 11:49:03.203215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.174 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.203475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.203495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 
00:40:38.175 [2024-06-10 11:49:03.203764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.203784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.203911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.203931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.204181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.204201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.204304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.204324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.204516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.204536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.204805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.204825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.204994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.205014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.205335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.205355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.205523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.205543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.205737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.205762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 
00:40:38.175 [2024-06-10 11:49:03.206061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.206081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.206330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.206349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.206541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.206561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.206864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.206887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.207223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.207243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.207440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.207459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.207761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.207781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.207968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.207988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.208244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.208274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.208546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.208561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 
00:40:38.175 [2024-06-10 11:49:03.208795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.208809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.209039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.209052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.209280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.209294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.209507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.209520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.209689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.209702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.209932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.209945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.210101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.210115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.210399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.210413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.210585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.210599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.210783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.210796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 
00:40:38.175 [2024-06-10 11:49:03.210965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.210978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.211257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.211271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.211553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.211566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.211879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.211892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.212041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.212054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.212243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.212255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.212474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.212487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.212656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.212670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.212828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.212841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.213071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.213084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 
00:40:38.175 [2024-06-10 11:49:03.213330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.213343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.213506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.213520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.213692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.213706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.213860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.213873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.214153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.214166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.214398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.214412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.214643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.214656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.214877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.214891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.215069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.215082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.215299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.215312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 
00:40:38.175 [2024-06-10 11:49:03.215558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.215571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.215831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.215845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.216058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.216071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.216306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.216321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.216485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.175 [2024-06-10 11:49:03.216498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.175 qpair failed and we were unable to recover it. 00:40:38.175 [2024-06-10 11:49:03.216720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.216733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.216906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.216919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.217137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.217151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.217299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.217312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.217425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.217438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 
00:40:38.176 [2024-06-10 11:49:03.217597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.217610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.217758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.217771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.218018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.218031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.218176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.218188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.218354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.218367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.218588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.218602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.218836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.218850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.219089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.219102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.219283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.219295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.219532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.219545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 
00:40:38.176 [2024-06-10 11:49:03.219767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.219780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.219993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.220006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.220284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.220297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.220443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.220456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.220753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.220767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.220994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.221007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.221234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.221247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.221476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.221490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.221772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.221786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.221943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.221957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 
00:40:38.176 [2024-06-10 11:49:03.222185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.222198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.222411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.222424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.222569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.222586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.222820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.222834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.223067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.223080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.223320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.223333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.223634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.223647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.223790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.223803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.223956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.223969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.224289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.224303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 
00:40:38.176 [2024-06-10 11:49:03.224533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.224546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.224759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.224773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.224942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.224956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.225195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.225211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.225448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.225461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.225695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.225708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.225924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.225937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.226082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.226095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.226406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.226419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.226583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.226597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 
00:40:38.176 [2024-06-10 11:49:03.226813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.226826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.227042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.227055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.227273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.227286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.227504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.227517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.227663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.227676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.227958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.227971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.228131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.228144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.228379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.228392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.228618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.228632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.228915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.228928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 
00:40:38.176 [2024-06-10 11:49:03.229087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.229099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.229406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.229419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.229700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.229713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.229945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.229958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.230172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.230184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.230284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.230296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.230528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.230541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.230752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.230765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.230936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.230949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.231243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.231256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 
00:40:38.176 [2024-06-10 11:49:03.231440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.176 [2024-06-10 11:49:03.231453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.176 qpair failed and we were unable to recover it. 00:40:38.176 [2024-06-10 11:49:03.231683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.231697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.231926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.231939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.232236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.232249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.232413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.232426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.232595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.232609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.232841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.232854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.233098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.233111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.233394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.233407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.233640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.233654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 
00:40:38.177 [2024-06-10 11:49:03.233885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.233898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.234169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.234182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.234412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.234426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.234708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.234723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.234957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.234970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.235115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.235128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.235429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.235443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.235671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.235685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.235901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.235914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.236129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.236142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 
00:40:38.177 [2024-06-10 11:49:03.236320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.236333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.236643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.236656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.236807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.236820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.237033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.237046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.237354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.237368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.237621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.237634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.237912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.237925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.238140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.238153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.238313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.238326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.238491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.238504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 
00:40:38.177 [2024-06-10 11:49:03.238812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.238825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.239134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.239148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.239313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.239326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.239481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.239494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.239744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.239758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.239905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.239918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.240141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.240154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.240434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.240447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.240743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.240756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.240970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.240983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 
00:40:38.177 [2024-06-10 11:49:03.241304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.241317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.241550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.241564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.241815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.241828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.242107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.242121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.242296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.242309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.242527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.242541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.242771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.242785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.243010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.243023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.243326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.243339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.243554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.243567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 
00:40:38.177 [2024-06-10 11:49:03.243785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.243798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.244025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.244038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.244196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.244209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.244438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.244452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.177 [2024-06-10 11:49:03.244736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.177 [2024-06-10 11:49:03.244750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.177 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.245003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.245016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.245320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.245333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.245610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.245623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.245777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.245790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.246095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.246108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 
00:40:38.453 [2024-06-10 11:49:03.246349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.246362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.246542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.246559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.246778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.246792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.247019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.247033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.247337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.247352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.247589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.247604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.247892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.247907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.248076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.248090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.248353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.248367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.248585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.248599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 
00:40:38.453 [2024-06-10 11:49:03.248831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.248845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.249126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.249140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.249306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.249319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.249599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.249614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.249900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.249915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.250221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.250235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.250464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.250478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.250811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.250825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.251113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.251126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.251354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.251367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 
00:40:38.453 [2024-06-10 11:49:03.251584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.251597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.453 qpair failed and we were unable to recover it. 00:40:38.453 [2024-06-10 11:49:03.251834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.453 [2024-06-10 11:49:03.251847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.252096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.252109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.252415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.252428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.252718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.252731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.253012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.253025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.253209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.253223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.253438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.253451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.253561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.253573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.253808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.253821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 
00:40:38.454 [2024-06-10 11:49:03.254122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.254135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.254366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.254379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.254682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.254696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.254932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.254947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.255105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.255119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.255452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.255465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.255693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.255706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.255953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.255966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.256271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.256284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.256509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.256523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 
00:40:38.454 [2024-06-10 11:49:03.256853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.256867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.257146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.257159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.257390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.257403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.257618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.257631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.257872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.257885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.258101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.258114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.258416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.258429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.258597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.258610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.258894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.258907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.259065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.259077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 
00:40:38.454 [2024-06-10 11:49:03.259361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.259374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.259587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.259601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.259896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.259909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.260220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.260233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.260464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.260477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.260644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.260657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.260882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.260895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.261222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.261235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.261402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.454 [2024-06-10 11:49:03.261415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.454 qpair failed and we were unable to recover it. 00:40:38.454 [2024-06-10 11:49:03.261698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.261712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 
00:40:38.455 [2024-06-10 11:49:03.261995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.262008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.262236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.262249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.262485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.262498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.262798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.262811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.263066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.263079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.263308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.263321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.263648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.263662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.263886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.263899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.264082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.264095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.264403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.264416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 
00:40:38.455 [2024-06-10 11:49:03.264698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.264711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.264927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.264940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.265222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.265235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.265535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.265550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.265787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.265800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.266081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.266094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.266375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.266388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.266693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.266706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.266923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.266936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.267162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.267175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 
00:40:38.455 [2024-06-10 11:49:03.267409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.267421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.267722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.267742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.267969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.267982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.268289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.268302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.268534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.268547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.268888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.268902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.269132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.269145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.269377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.269390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.269697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.269710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.269876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.269889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 
00:40:38.455 [2024-06-10 11:49:03.270193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.270206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.270530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.270543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.270697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.270710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.270996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.271009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.271243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.271256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.271541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.455 [2024-06-10 11:49:03.271554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.455 qpair failed and we were unable to recover it. 00:40:38.455 [2024-06-10 11:49:03.271875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.271888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.272170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.272183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.272416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.272429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.272736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.272750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 
00:40:38.456 [2024-06-10 11:49:03.273085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.273099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.273394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.273407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.273636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.273650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.273932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.273945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.274254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.274267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.274519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.274532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.274782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.274796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.275039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.275052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.275355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.275368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.275673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.275686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 
00:40:38.456 [2024-06-10 11:49:03.275971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.275984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.276284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.276297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.276601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.276614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.276896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.276912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.277074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.277087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.277320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.277333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.277563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.277579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.277820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.277833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.278070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.278083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.278375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.278388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 
00:40:38.456 [2024-06-10 11:49:03.278552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.278565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.278789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.278802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.278969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.278982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.279285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.279298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.279596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.279609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.279823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.279836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.280063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.280076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.280234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.280247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.280474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.280487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.280718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.280732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 
00:40:38.456 [2024-06-10 11:49:03.281048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.281061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.281237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.281250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.456 qpair failed and we were unable to recover it. 00:40:38.456 [2024-06-10 11:49:03.281550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.456 [2024-06-10 11:49:03.281562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.281848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.281861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.282036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.282049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.282381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.282396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.282680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.282693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.282931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.282944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.283198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.283217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.283521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.283534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 
00:40:38.457 [2024-06-10 11:49:03.283706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.283719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.283871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.283884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.284047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.284060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.284293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.284306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.284471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.284484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.284764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.284777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.285058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.285071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.285318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.285331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.285430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.285443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.285538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.285551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 
00:40:38.457 [2024-06-10 11:49:03.285783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.285797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.286041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.286054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.286353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.286365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.286532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.286548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.286727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.286740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.286973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.286986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.287206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.287219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.287445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.287458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.287688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.287702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.287858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.287872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 
00:40:38.457 [2024-06-10 11:49:03.288112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.288125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.288345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.288359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.288570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.288588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.457 [2024-06-10 11:49:03.288734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.457 [2024-06-10 11:49:03.288747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.457 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.288899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.288912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.289203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.289216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.289446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.289458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.289743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.289756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.290015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.290028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.290204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.290217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 
00:40:38.458 [2024-06-10 11:49:03.290376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.290389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.290597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.290610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.290903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.290916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.291137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.291150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.291477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.291490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.291797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.291810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.292024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.292037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.292204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.292217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.292375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.292389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.292639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.292653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 
00:40:38.458 [2024-06-10 11:49:03.292870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.292883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.293097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.293110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.293286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.293299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.293589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.293602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.293884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.293897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.294110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.294123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.294385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.294398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.294639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.294652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.294937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.294950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.295182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.295195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 
00:40:38.458 [2024-06-10 11:49:03.295477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.295490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.295725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.295739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.296021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.296034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.296355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.296368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.296594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.296607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.296774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.296787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.297078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.297091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.297384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.297397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.297680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.297694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.297920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.297933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 
00:40:38.458 [2024-06-10 11:49:03.298145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.298158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.458 [2024-06-10 11:49:03.298268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.458 [2024-06-10 11:49:03.298281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.458 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.298506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.298519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.298622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.298635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.298792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.298805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.298982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.298995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.299276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.299289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.299451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.299464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.299747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.299760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.299974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.299987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 
00:40:38.459 [2024-06-10 11:49:03.300165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.300178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.300411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.300424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.300718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.300731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.300953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.300967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.301298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.301311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.301530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.301544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.301773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.301786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.302002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.302015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.302231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.302244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.302406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.302419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 
00:40:38.459 [2024-06-10 11:49:03.302702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.302717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.302932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.302944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.303227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.303240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.303487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.303500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.303739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.303752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.304035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.304048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.304331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.304344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.304583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.304596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.304880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.304893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.305141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.305154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 
00:40:38.459 [2024-06-10 11:49:03.305339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.305353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.305583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.305596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.305740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.305753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.305922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.305935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.306107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.306121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.306334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.306347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.306455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.306467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.306688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.306701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.306925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.306938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 00:40:38.459 [2024-06-10 11:49:03.307219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.459 [2024-06-10 11:49:03.307232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.459 qpair failed and we were unable to recover it. 
00:40:38.460 [2024-06-10 11:49:03.307539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.307552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.307728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.307742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.307974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.307987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.308202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.308216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.308465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.308478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.308695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.308709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.308862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.308875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.309098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.309111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.309269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.309282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.309539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.309552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 
00:40:38.460 [2024-06-10 11:49:03.309770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.309784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.310067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.310080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.310300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.310313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.310598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.310611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.310777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.310790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.310943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.310956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.311237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.311250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.311500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.311513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.311795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.311808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.312091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.312104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 
00:40:38.460 [2024-06-10 11:49:03.312351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.312366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.312720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.312734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.313015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.313028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.313188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.313201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.313414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.313427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.313726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.313740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.313953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.313966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.314247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.314260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.314473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.314486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.314646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.314659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 
00:40:38.460 [2024-06-10 11:49:03.314976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.314989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.315243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.315256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.315477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.315490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.315663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.315676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.315903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.315916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.316217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.316230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.316530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.316543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.316800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.460 [2024-06-10 11:49:03.316813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.460 qpair failed and we were unable to recover it. 00:40:38.460 [2024-06-10 11:49:03.317127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.317141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.317435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.317448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 
00:40:38.461 [2024-06-10 11:49:03.317631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.317644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.317906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.317919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.318212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.318225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.318526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.318539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.318792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.318805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.319041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.319054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.319299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.319312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.319528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.319541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.319774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.319787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.319999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.320012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 
00:40:38.461 [2024-06-10 11:49:03.320242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.320255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.320338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.320351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.320588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.320601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.320834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.320847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.321130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.321143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.321462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.321475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.321701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.321714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.321940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.321953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.322167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.322180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.322486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.322499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 
00:40:38.461 [2024-06-10 11:49:03.322714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.322730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.322945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.322957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.323287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.323300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.323629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.323642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.323875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.323888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.323997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.324010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.324224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.324237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.324518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.324531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.324789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.324803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.325025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.325038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 
00:40:38.461 [2024-06-10 11:49:03.325274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.325287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.325578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.325591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.325777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.325790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.326013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.326026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.326190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.326203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.461 [2024-06-10 11:49:03.326417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.461 [2024-06-10 11:49:03.326430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.461 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.326540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.326553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.326723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.326737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.327044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.327058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.327216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.327229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 
00:40:38.462 [2024-06-10 11:49:03.327530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.327543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.327729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.327743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.327969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.327982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.328194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.328207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.328436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.328449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.328660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.328673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.328883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.328896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.329136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.329149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.329447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.329460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.329737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.329750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 
00:40:38.462 [2024-06-10 11:49:03.329900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.329913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.330164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.330177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.330423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.330436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.330713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.330726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.330958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.330971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.331274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.331287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.331529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.331542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.331772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.331785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.332086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.332099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.332315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.332328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 
00:40:38.462 [2024-06-10 11:49:03.332657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.332672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.332912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.332926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.333231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.333245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.333491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.333504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.333732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.333746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.334028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.334041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.334260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.334272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.334555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.462 [2024-06-10 11:49:03.334568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.462 qpair failed and we were unable to recover it. 00:40:38.462 [2024-06-10 11:49:03.334883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.334896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.335134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.335147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 
00:40:38.463 [2024-06-10 11:49:03.335448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.335461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.335675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.335688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.335922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.335935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.336159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.336172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.336351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.336364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.336691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.336705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.336928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.336941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.337248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.337262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.337587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.337600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.337885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.337898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 
00:40:38.463 [2024-06-10 11:49:03.338046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.338059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.338224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.338237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.338475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.338488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.338806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.338819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.339051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.339064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.339285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.339297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.339517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.339530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.339772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.339785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.340022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.340035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.340255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.340268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 
00:40:38.463 [2024-06-10 11:49:03.340570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.340595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.340821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.340834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.341112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.341125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.341369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.341382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.341546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.341559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.341789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.341803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.342031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.342044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.342345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.342358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.342585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.342598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.342904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.342917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 
00:40:38.463 [2024-06-10 11:49:03.343131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.343147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.343328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.343341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.343564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.343581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.343862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.343875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.344034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.344047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.344278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.463 [2024-06-10 11:49:03.344292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.463 qpair failed and we were unable to recover it. 00:40:38.463 [2024-06-10 11:49:03.344451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.344464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.344782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.344795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.344939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.344952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.345252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.345265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 
00:40:38.464 [2024-06-10 11:49:03.345494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.345507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.345811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.345825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.345986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.345998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.346287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.346300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.346448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.346462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.346672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.346685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.346910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.346923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.347135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.347148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.347453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.347466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.347718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.347731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 
00:40:38.464 [2024-06-10 11:49:03.347839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.347851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.348031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.348044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.348289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.348302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.348539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.348553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.348806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.348819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.349050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.349064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.349346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.349359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.349670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.349684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.350024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.350038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.350319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.350332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 
00:40:38.464 [2024-06-10 11:49:03.350510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.350523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.350804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.350818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.351113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.351126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.351368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.351381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.351601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.351614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.351854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.351868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.352102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.352115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.352361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.352373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.352539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.352552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.352731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.352744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 
00:40:38.464 [2024-06-10 11:49:03.352913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.352928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.353237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.353250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.353479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.353492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.353705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.464 [2024-06-10 11:49:03.353719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.464 qpair failed and we were unable to recover it. 00:40:38.464 [2024-06-10 11:49:03.353946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.353959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.354191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.354205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.354420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.354433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.354745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.354758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.355060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.355073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.355289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.355302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 
00:40:38.465 [2024-06-10 11:49:03.355515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.355527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.355823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.355837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.356118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.356131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.356432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.356445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.356682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.356695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.356934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.356947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.357172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.357185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.357410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.357424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.357727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.357741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.357906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.357920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 
00:40:38.465 [2024-06-10 11:49:03.358222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.358235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.358543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.358556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.358795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.358809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.359132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.359145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.359437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.359450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.359682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.359695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.359912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.359925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.360138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.360151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.360381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.360394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.360652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.360666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 
00:40:38.465 [2024-06-10 11:49:03.360892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.360905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.361186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.361199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.361415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.361428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.361715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.361729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.361966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.361979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.362203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.362216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.362442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.362455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.362760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.362773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.363070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.363084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.363241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.363253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 
00:40:38.465 [2024-06-10 11:49:03.363462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.363478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.363637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.363651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.465 [2024-06-10 11:49:03.363904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.465 [2024-06-10 11:49:03.363917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.465 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.364134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.364147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.364427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.364441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.364651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.364664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.364992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.365005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.365287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.365301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.365637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.365651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.365880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.365893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 
00:40:38.466 [2024-06-10 11:49:03.366046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.366059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.366204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.366218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.366515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.366528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.366741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.366754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.367056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.367069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.367283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.367296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.367459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.367472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.367693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.367707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.367932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.367945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.368169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.368182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 
00:40:38.466 [2024-06-10 11:49:03.368464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.368477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.368713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.368726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.368954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.368967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.369250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.369263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.369543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.369556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.369781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.369795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.370010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.370024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.370188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.370201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.370534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.370547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.370779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.370792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 
00:40:38.466 [2024-06-10 11:49:03.371019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.371033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.371267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.371280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.371564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.371581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.371752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.371765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.371998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.372011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.372244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.372257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.372486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.372499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.372802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.372815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.372994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.373007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.373327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.373340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 
00:40:38.466 [2024-06-10 11:49:03.373619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.466 [2024-06-10 11:49:03.373635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.466 qpair failed and we were unable to recover it. 00:40:38.466 [2024-06-10 11:49:03.373803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.373816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.374128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.374141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.374441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.374454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.374707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.374720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.374955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.374968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.375252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.375265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.375568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.375585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.375808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.375821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.376053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.376066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 
00:40:38.467 [2024-06-10 11:49:03.376348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.376361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.376588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.376601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.376769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.376782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.376960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.376973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.377210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.377223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.377479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.377491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.377748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.377761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.377909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.377921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.378201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.378214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.378509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.378523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 
00:40:38.467 [2024-06-10 11:49:03.378773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.378787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.379111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.379125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.379355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.379367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.379583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.379596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.379832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.379845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.380125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.380138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.380378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.380391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.380700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.380714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.381001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.381014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.381238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.381251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 
00:40:38.467 [2024-06-10 11:49:03.381490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.381504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.381808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.381821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.382055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.382069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.382307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.382320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.467 qpair failed and we were unable to recover it. 00:40:38.467 [2024-06-10 11:49:03.382538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.467 [2024-06-10 11:49:03.382551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.382840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.382854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.383016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.383029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.383250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.383263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.383475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.383488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.383701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.383715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 
00:40:38.468 [2024-06-10 11:49:03.383857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.383872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.384180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.384193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.384450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.384463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.384697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.384710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.384895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.384908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.385190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.385203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.385418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.385431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.385724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.385737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.385981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.385994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 00:40:38.468 [2024-06-10 11:49:03.386280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.468 [2024-06-10 11:49:03.386293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.468 qpair failed and we were unable to recover it. 
00:40:38.468 [2024-06-10 11:49:03.386506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:38.468 [2024-06-10 11:49:03.386519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:38.468 qpair failed and we were unable to recover it.
00:40:38.468 [... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every retry from 11:49:03.386801 through 11:49:03.447337 ...]
00:40:38.474 [2024-06-10 11:49:03.447337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:38.474 [2024-06-10 11:49:03.447350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:38.474 qpair failed and we were unable to recover it.
00:40:38.474 [2024-06-10 11:49:03.447632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.447644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.447804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.447816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.448113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.448126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.448353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.448366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.448673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.448685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.448914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.448927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.449093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.449107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.449412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.449427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.449664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.449677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.449933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.449945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 
00:40:38.474 [2024-06-10 11:49:03.450111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.450123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.450299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.450311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.450624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.450636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.450867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.450879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.451164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.451177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.451506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.451518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.451728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.451741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.451844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.451855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.452083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.452095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.452257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.452269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 
00:40:38.474 [2024-06-10 11:49:03.452520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.452532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.452753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.452765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.452922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.452934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.453132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.453144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.453392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.453405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.453593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.453605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.453881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.453894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.454222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.454234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.454397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.454408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.454649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.454661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 
00:40:38.474 [2024-06-10 11:49:03.454830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.454842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.455075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.455087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.455370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.455382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.455616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.455628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.455807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.455819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.456126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.474 [2024-06-10 11:49:03.456138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.474 qpair failed and we were unable to recover it. 00:40:38.474 [2024-06-10 11:49:03.456354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.456366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.456599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.456611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.456941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.456982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.457352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.457391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 
00:40:38.475 [2024-06-10 11:49:03.457700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.457740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.458096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.458136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.458301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.458312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.458483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.458523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.458809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.458849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.459033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.459046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.459281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.459293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.459506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.459520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.459825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.459865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.460160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.460199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 
00:40:38.475 [2024-06-10 11:49:03.460549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.460598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.460892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.460932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.461275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.461316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.461675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.461715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.462009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.462049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.462416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.462455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.462822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.462863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.463103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.463143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.463486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.463527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.463831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.463872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 
00:40:38.475 [2024-06-10 11:49:03.464192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.464231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.464606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.464646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.465045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.465085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.465380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.465420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.465720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.465760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.466022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.466035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.466195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.466207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.466522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.466533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.466844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.466884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.467110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.467149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 
00:40:38.475 [2024-06-10 11:49:03.467460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.467499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.467742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.467783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.468062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.468074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.468161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.475 [2024-06-10 11:49:03.468173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.475 qpair failed and we were unable to recover it. 00:40:38.475 [2024-06-10 11:49:03.468482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.468495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.468777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.468817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.469124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.469164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.469426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.469438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.469599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.469612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.469858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.469870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 
00:40:38.476 [2024-06-10 11:49:03.470040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.470053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.470292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.470304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.470638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.470651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.470813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.470825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.471106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.471145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.471383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.471422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.471718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.471731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.471963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.471977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.472216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.472228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.472372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.472385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 
00:40:38.476 [2024-06-10 11:49:03.472567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.472615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.472834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.472874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.473190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.473229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.473524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.473564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.473801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.473814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.474053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.474093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.474306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.474345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.474689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.474730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.475008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.475048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.475381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.475421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 
00:40:38.476 [2024-06-10 11:49:03.475718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.475773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.476063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.476103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.476393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.476434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.476721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.476761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.476929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.476942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.477172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.477184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.477435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.476 [2024-06-10 11:49:03.477475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.476 qpair failed and we were unable to recover it. 00:40:38.476 [2024-06-10 11:49:03.477755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.477795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.478039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.478078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.478325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.478364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 
00:40:38.477 [2024-06-10 11:49:03.478731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.478771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.479138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.479178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.479456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.479495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.479892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.479933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.480177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.480190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.480384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.480424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.480699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.480752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.481106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.481145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.481517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.481557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.481880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.481920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 
00:40:38.477 [2024-06-10 11:49:03.482201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.482240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.482467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.482507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.482849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.482888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.483046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.483058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.483337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.483376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.483618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.483659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.483894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.483940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.484151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.484166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.484400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.484413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.484728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.484768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 
00:40:38.477 [2024-06-10 11:49:03.484990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.485029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.485318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.485357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.485686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.485726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.486002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.486014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.486191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.486203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.486431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.486443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.486666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.486707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.486942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.486955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.487155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.487195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 00:40:38.477 [2024-06-10 11:49:03.487420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.477 [2024-06-10 11:49:03.487459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.477 qpair failed and we were unable to recover it. 
00:40:38.477 [2024-06-10 11:49:03.488703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:38.477 [2024-06-10 11:49:03.488728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:38.477 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for timestamps 2024-06-10 11:49:03.489078 through 11:49:03.547245 ...]
00:40:38.756 [2024-06-10 11:49:03.547432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:38.756 [2024-06-10 11:49:03.547445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:38.756 qpair failed and we were unable to recover it.
00:40:38.756 [2024-06-10 11:49:03.547614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.756 [2024-06-10 11:49:03.547654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.756 qpair failed and we were unable to recover it. 00:40:38.756 [2024-06-10 11:49:03.547946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.756 [2024-06-10 11:49:03.547985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.756 qpair failed and we were unable to recover it. 00:40:38.756 [2024-06-10 11:49:03.548279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.756 [2024-06-10 11:49:03.548319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.756 qpair failed and we were unable to recover it. 00:40:38.756 [2024-06-10 11:49:03.548533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.756 [2024-06-10 11:49:03.548545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.756 qpair failed and we were unable to recover it. 00:40:38.756 [2024-06-10 11:49:03.548853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.756 [2024-06-10 11:49:03.548865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.756 qpair failed and we were unable to recover it. 00:40:38.756 [2024-06-10 11:49:03.549041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.549053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.549282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.549323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.549661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.549702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.550010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.550049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.550296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.550335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 
00:40:38.757 [2024-06-10 11:49:03.550549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.550598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.550903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.550943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.551253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.551265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.551364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.551376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.551613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.551654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.551932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.551972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.552229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.552241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.552458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.552498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.552851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.552891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.553106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.553118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 
00:40:38.757 [2024-06-10 11:49:03.553279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.553318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.553598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.553638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.553919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.553958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.554181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.554221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.554521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.554532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.554813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.554825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.555076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.555089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.555342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.555356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.555565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.555582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.555880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.555892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 
00:40:38.757 [2024-06-10 11:49:03.556110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.556133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.556365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.556377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.556618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.556631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.556912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.556924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.557207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.557219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.557506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.557546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.557923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.557964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.558204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.558216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.558389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.558428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.558750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.558791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 
00:40:38.757 [2024-06-10 11:49:03.559027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.559066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.559354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.559393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.559622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.757 [2024-06-10 11:49:03.559663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.757 qpair failed and we were unable to recover it. 00:40:38.757 [2024-06-10 11:49:03.560006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.560040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.560206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.560218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.560377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.560389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.560586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.560627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.560795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.560834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.561124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.561163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.561379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.561391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 
00:40:38.758 [2024-06-10 11:49:03.561558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.561570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.561821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.561833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.562079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.562091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.562374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.562386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.562558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.562570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.562673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.562685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.562940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.562966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.563311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.563350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.563610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.563651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.563945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.563985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 
00:40:38.758 [2024-06-10 11:49:03.564259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.564299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.564526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.564566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.564815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.564854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.565092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.565104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.565319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.565331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.565611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.565624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.565847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.565886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.566125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.566171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.566456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.566496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.566860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.566901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 
00:40:38.758 [2024-06-10 11:49:03.567187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.567227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.567515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.567554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.567843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.567882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.568167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.568207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.568422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.568461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.568742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.568782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.569009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.569049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.569346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.569385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.569668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.569709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.569933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.569972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 
00:40:38.758 [2024-06-10 11:49:03.570194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.570206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.758 qpair failed and we were unable to recover it. 00:40:38.758 [2024-06-10 11:49:03.570442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.758 [2024-06-10 11:49:03.570482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.570813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.570854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.571089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.571129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.571474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.571514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.571831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.571872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.572114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.572153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.572477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.572516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.572803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.572843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.573082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.573122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 
00:40:38.759 [2024-06-10 11:49:03.573442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.573482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.573795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.573807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.573969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.573981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.574202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.574214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.574480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.574521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.574842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.574883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.575126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.575166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.575462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.575501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.575783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.575795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.576022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.576055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 
00:40:38.759 [2024-06-10 11:49:03.576331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.576371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.576672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.576712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.576949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.576988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.577280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.577319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.577690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.577730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.578038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.578078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.578382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.578421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.578720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.578760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.578979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.579019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.579256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.579296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 
00:40:38.759 [2024-06-10 11:49:03.579596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.579624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.579932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.579944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.580166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.580178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.580405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.580417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.580593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.580605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.580930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.580970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.581188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.581200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.581443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.581482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.581714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.581754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.759 qpair failed and we were unable to recover it. 00:40:38.759 [2024-06-10 11:49:03.582070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.759 [2024-06-10 11:49:03.582122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 
00:40:38.760 [2024-06-10 11:49:03.582358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.582369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.582608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.582620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.582791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.582803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.582978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.582990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.583229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.583269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.583597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.583637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.584032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.584072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.584353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.584365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.584523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.584535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.584795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.584836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 
00:40:38.760 [2024-06-10 11:49:03.585125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.585164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.585349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.585362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.585653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.585693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.585923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.585963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.586222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.586268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.586630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.586671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.586960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.587000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.587338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.587350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.587584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.587597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.587756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.587768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 
00:40:38.760 [2024-06-10 11:49:03.587981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.587993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.588235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.588274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.588556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.588607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.588821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.588861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.589152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.589192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.589420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.589459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.589758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.589798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.590173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.590213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.590506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.590547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 00:40:38.760 [2024-06-10 11:49:03.590911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.760 [2024-06-10 11:49:03.590950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.760 qpair failed and we were unable to recover it. 
00:40:38.760 [2024-06-10 11:49:03.591224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.591236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.591411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.591424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.591682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.591721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.592027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.592067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.592346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.592369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.592622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.592634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.592787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.592799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.592976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.592988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.593287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.593327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.593618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.593660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 
00:40:38.761 [2024-06-10 11:49:03.593925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.593965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.594224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.594236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.594415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.594427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.594726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.594766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.594983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.595023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.595249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.595289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.595492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.595504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.595745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.595785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.596130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.596170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.596437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.596449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 
00:40:38.761 [2024-06-10 11:49:03.596681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.596693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.596976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.596988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.597213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.597253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.597586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.597627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.597871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.597916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.598201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.598240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.598537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.598586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.598947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.598986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.599267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.599279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.599431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.599443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 
00:40:38.761 [2024-06-10 11:49:03.599661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.599701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.600035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.600075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.600323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.600362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.600642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.600682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.601043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.601083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.601412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.601452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.601731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.601772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.602087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.602126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.761 [2024-06-10 11:49:03.602373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.761 [2024-06-10 11:49:03.602413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.761 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.602688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.602729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 
00:40:38.762 [2024-06-10 11:49:03.603100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.603140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.603535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.603586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.603884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.603923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.604201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.604241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.604570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.604619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.604934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.604974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.605186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.605226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.605518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.605557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.605862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.605902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.606269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.606309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 
00:40:38.762 [2024-06-10 11:49:03.606621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.606662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.606892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.606932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.607227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.607267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.607606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.607646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.607944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.607983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.608343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.608382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.608754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.608794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.609042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.609081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.609306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.609318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.609548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.609594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 
00:40:38.762 [2024-06-10 11:49:03.609818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.609859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.610204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.610244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.610475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.610515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.610891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.610932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.611307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.611352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.611648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.611688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.612029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.612069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.612384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.612396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.612655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.612667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.612980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.613019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 
00:40:38.762 [2024-06-10 11:49:03.613307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.613356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.613521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.613533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.613767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.613779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.614040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.614052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.614209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.614221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.614390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.614430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.614779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.762 [2024-06-10 11:49:03.614819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.762 qpair failed and we were unable to recover it. 00:40:38.762 [2024-06-10 11:49:03.615108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.615151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.615458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.615490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.615801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.615842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 
00:40:38.763 [2024-06-10 11:49:03.616069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.616109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.616408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.616447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.616694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.616734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.617082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.617122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.617400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.617440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.617723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.617735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.617971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.618011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.618299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.618338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.618649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.618689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.618993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.619033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 
00:40:38.763 [2024-06-10 11:49:03.619329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.619369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.619652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.619692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.619921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.619960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.620244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.620284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.620573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.620621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.620967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.621007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.621287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.621327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.621564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.621624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.621907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.621947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.622294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.622334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 
00:40:38.763 [2024-06-10 11:49:03.622574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.622590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.622752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.622765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.623031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.623070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.623371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.623411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.623730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.623744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.623899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.623910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.624148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.624160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.624377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.624416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.624786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.624826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.625122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.625162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 
00:40:38.763 [2024-06-10 11:49:03.625451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.625490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.625699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.625711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.625802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.625814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.625975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.625987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.626224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.626263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.763 qpair failed and we were unable to recover it. 00:40:38.763 [2024-06-10 11:49:03.626559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.763 [2024-06-10 11:49:03.626620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.626862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.626874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.627161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.627172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.627335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.627375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.627606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.627646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 
00:40:38.764 [2024-06-10 11:49:03.627895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.627935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.628223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.628256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.628492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.628504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.628728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.628740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.628966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.628978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.629200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.629211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.629392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.629404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.629642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.629683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.629961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.630001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.630348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.630388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 
00:40:38.764 [2024-06-10 11:49:03.630678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.630718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.631087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.631126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.631437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.631476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.631711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.631723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.631894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.631906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.632152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.632192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.632492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.632531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.632824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.632864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.633136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.633175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.633367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.633378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 
00:40:38.764 [2024-06-10 11:49:03.633545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.633557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.633737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.633778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.634065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.634105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.634389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.634428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.634713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.634760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.634992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.635031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.635317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.635356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.635709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.635749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.635966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.636006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.636291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.636330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 
00:40:38.764 [2024-06-10 11:49:03.636635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.636675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.637040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.637079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.637364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.637404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.764 [2024-06-10 11:49:03.637646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.764 [2024-06-10 11:49:03.637688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.764 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.637783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.637795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.638053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.638092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.638390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.638430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.638765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.638778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.639005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.639018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.639236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.639247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 
00:40:38.765 [2024-06-10 11:49:03.639475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.639487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.639812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.639853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.640102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.640142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.640447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.640459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.640743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.640755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.641049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.641089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.641315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.641355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.641640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.641680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.641905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.641944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.642333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.642372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 
00:40:38.765 [2024-06-10 11:49:03.642583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.642623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.642976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.643016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.643231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.643271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.643562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.643612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.643955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.643995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.644340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.644379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.644682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.644722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.645012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.645051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.645394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.645433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.645701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.645713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 
00:40:38.765 [2024-06-10 11:49:03.645929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.645941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.646112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.646124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.646310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.646350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.646628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.646668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.646960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.647006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.647286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.647326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.765 [2024-06-10 11:49:03.647597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.765 [2024-06-10 11:49:03.647609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.765 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.647818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.647829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.648120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.648160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.648507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.648546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 
00:40:38.766 [2024-06-10 11:49:03.648793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.648833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.649069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.649109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.649392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.649431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.649648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.649660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.649817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.649829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.649988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.650000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.650311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.650351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.650567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.650613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.650843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.650884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.651128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.651168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 
00:40:38.766 [2024-06-10 11:49:03.651536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.651584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.651796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.651836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.652204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.652243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.652542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.652593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.652806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.652846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.653060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.653100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.653458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.653497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.653800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.653841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.654053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.654093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.654387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.654427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 
00:40:38.766 [2024-06-10 11:49:03.654704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.654740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.654959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.654971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.655137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.655149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.655305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.655317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.655467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.655479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.655714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.655726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.655894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.655906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.656048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.656060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.656332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.656371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.656718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.656758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 
00:40:38.766 [2024-06-10 11:49:03.656985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.657025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.657311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.657351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.657620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.657632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.657938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.657950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.766 [2024-06-10 11:49:03.658281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.766 [2024-06-10 11:49:03.658327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.766 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.658553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.658605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.658979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.659018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.659240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.659279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.659588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.659629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.659837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.659849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 
00:40:38.767 [2024-06-10 11:49:03.660080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.660119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.660412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.660452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.660739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.660780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.661065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.661104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.661331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.661370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.661622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.661663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.662027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.662068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.662294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.662334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.662552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.662565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.662814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.662854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 
00:40:38.767 [2024-06-10 11:49:03.663136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.663184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.663502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.663542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.663912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.663951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.664181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.664221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.664507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.664548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.664833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.664846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.665152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.665164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.665415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.665455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.665742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.665784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.666081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.666121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 
00:40:38.767 [2024-06-10 11:49:03.666492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.666531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.666793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.666805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.667084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.667097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.667387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.667427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.667663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.667703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.667920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.667932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.668162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.668203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.668490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.668529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.668815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.668828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.668973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.668985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 
00:40:38.767 [2024-06-10 11:49:03.669211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.669233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.669466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.669478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.669696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.767 [2024-06-10 11:49:03.669708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.767 qpair failed and we were unable to recover it. 00:40:38.767 [2024-06-10 11:49:03.669872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.669884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.669983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.669998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.670238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.670250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.670479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.670518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.670764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.670805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.671087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.671126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.671464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.671504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 
00:40:38.768 [2024-06-10 11:49:03.671733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.671745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.671918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.671957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.672128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.672168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.672388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.672438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.672719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.672732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.672892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.672925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.673138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.673177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.673419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.673459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.673716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.673728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.674009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.674021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 
00:40:38.768 [2024-06-10 11:49:03.674329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.674342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.674553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.674565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.674794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.674806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.675019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.675031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.675200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.675240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.675561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.675611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.675999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.676020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.676269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.676295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.676538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.676554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.676869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.676882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 
00:40:38.768 [2024-06-10 11:49:03.677095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.677107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.677327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.677339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.677566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.677584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.677798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.677810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.678030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.678042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.678321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.678333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.678498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.678510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.678666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.678679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.678962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.678974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.679150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.679162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 
00:40:38.768 [2024-06-10 11:49:03.679446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.768 [2024-06-10 11:49:03.679459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.768 qpair failed and we were unable to recover it. 00:40:38.768 [2024-06-10 11:49:03.679625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.679637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.679875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.679887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.680050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.680061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.680314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.680329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.680611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.680624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.680858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.680870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.681034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.681047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.681205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.681217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.681371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.681383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 
00:40:38.769 [2024-06-10 11:49:03.681551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.681563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.681715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.681727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.681962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.681977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.682217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.682232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.682469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.682483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.682679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.682692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.682910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.682922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.683078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.683090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.683246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.683258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.683401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.683413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 
00:40:38.769 [2024-06-10 11:49:03.683691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.683717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.683892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.683906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.684160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.684174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.684473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.684487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.684688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.684701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.684804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.684816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.684987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.684999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.685234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.685247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.685476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.685489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.685734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.685746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 
00:40:38.769 [2024-06-10 11:49:03.685976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.685988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.686099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.686112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.686265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.686277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.686426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.769 [2024-06-10 11:49:03.686439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.769 qpair failed and we were unable to recover it. 00:40:38.769 [2024-06-10 11:49:03.686606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.686619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.686809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.686821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.687050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.687062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.687281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.687293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.687440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.687452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.687540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.687552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 
00:40:38.770 [2024-06-10 11:49:03.687842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.687854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.688114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.688127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.688345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.688357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.688522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.688534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.688632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.688647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.688952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.688965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.689181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.689193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.689429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.689441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.689668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.689681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.689984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.689996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 
00:40:38.770 [2024-06-10 11:49:03.690227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.690239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.690531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.690543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.690706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.690719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.690886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.690899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.691116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.691129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.691457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.691469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.691634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.691646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.691866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.691878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.692053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.692065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.692326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.692339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 
00:40:38.770 [2024-06-10 11:49:03.692527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.692539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.692755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.692768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.692949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.692962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.693179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.693192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.693478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.693491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.693600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.693612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.693838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.693851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.694078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.694092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.694355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.694370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.694542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.694565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 
00:40:38.770 [2024-06-10 11:49:03.694801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.694818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.694996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.695008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.770 qpair failed and we were unable to recover it. 00:40:38.770 [2024-06-10 11:49:03.695245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.770 [2024-06-10 11:49:03.695258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.695494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.695506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.695750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.695762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.696110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.696122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.696368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.696380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.696609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.696621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.696779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.696791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.697028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.697041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 
00:40:38.771 [2024-06-10 11:49:03.697153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.697171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.697334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.697346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.697573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.697651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.697935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.697975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.698262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.698308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.698592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.698633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.698937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.698977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.699202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.699241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.699531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.699570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.699971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.700011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 
00:40:38.771 [2024-06-10 11:49:03.700233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.700273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.700571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.700625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.700916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.700957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.701258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.701298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.701597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.701637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.701867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.701907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.702128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.702168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.702480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.702520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.702766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.702807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.703099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.703138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 
00:40:38.771 [2024-06-10 11:49:03.703480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.703520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.703806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.703848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.704155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.704195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.704473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.704513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.704773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.704786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.705105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.705145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.705433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.705472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.705719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.705768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.706064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.706105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.706313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.706352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 
00:40:38.771 [2024-06-10 11:49:03.706641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.771 [2024-06-10 11:49:03.706681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.771 qpair failed and we were unable to recover it. 00:40:38.771 [2024-06-10 11:49:03.707062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.707103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.707382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.707421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.707628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.707641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.707808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.707820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.708051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.708063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.708355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.708396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.708623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.708665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.708902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.708942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.709306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.709346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 
00:40:38.772 [2024-06-10 11:49:03.709558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.709610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.709770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.709810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.710017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.710057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.710357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.710409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.710594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.710628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.710789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.710801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.710965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.711004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.711223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.711262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.711545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.711597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.711883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.711941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 
00:40:38.772 [2024-06-10 11:49:03.712248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.712295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.712601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.712643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.712807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.712819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.713058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.713098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.713377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.713418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.713708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.713749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.714139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.714179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.714411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.714450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.714688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.714701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.714869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.714909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 
00:40:38.772 [2024-06-10 11:49:03.715212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.715251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.715523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.715563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.715823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.715864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.716092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.716131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.716371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.716410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.716631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.716643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.716864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.716903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.717183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.717222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.717522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.772 [2024-06-10 11:49:03.717561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.772 qpair failed and we were unable to recover it. 00:40:38.772 [2024-06-10 11:49:03.717798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.717810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 
00:40:38.773 [2024-06-10 11:49:03.718036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.718048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.718262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.718274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.718492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.718504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.718741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.718753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.718986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.719025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.719409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.719448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.719658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.719671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.719826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.719865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.720181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.720220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.720490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.720502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 
00:40:38.773 [2024-06-10 11:49:03.720720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.720732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.720976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.721016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.721400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.721439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.721791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.721804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.722101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.722115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.722344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.722356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.722618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.722630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.722791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.722803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.723035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.723074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.723440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.723485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 
00:40:38.773 [2024-06-10 11:49:03.723719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.723732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.723894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.723906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.724138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.724150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.724382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.724393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.724628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.724640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.724865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.724877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.725052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.725064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.725245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.725285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.725504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.725544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.725926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.725966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 
00:40:38.773 [2024-06-10 11:49:03.726200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.726239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.726593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.726633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.726934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.726974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.727314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.727353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.727742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.727754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.727988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.728027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.773 [2024-06-10 11:49:03.728391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.773 [2024-06-10 11:49:03.728431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.773 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.728650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.728663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.728901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.728942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.729247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.729287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 
00:40:38.774 [2024-06-10 11:49:03.729493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.729533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.729759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1215b50 is same with the state(5) to be set 00:40:38.774 [2024-06-10 11:49:03.730220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.730295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.730676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.730721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.730947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.730988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.731109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.731123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.731349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.731389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.731611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.731652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.774 qpair failed and we were unable to recover it. 00:40:38.774 [2024-06-10 11:49:03.731900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.774 [2024-06-10 11:49:03.731940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.732236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.732276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 
00:40:38.775 [2024-06-10 11:49:03.732553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.732606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.732951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.732991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.733223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.733263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.733588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.733629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.733974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.734014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.734363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.734404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.734771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.734812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.735126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.735138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.735449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.735490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.735804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.735844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 
00:40:38.775 [2024-06-10 11:49:03.736175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.736215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.736512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.736552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.736890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.736930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.737285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.737325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.737629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.737641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.737965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.737977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.738266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.738306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.738650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.775 [2024-06-10 11:49:03.738691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.775 qpair failed and we were unable to recover it. 00:40:38.775 [2024-06-10 11:49:03.738962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.738975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.739205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.739217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 
00:40:38.776 [2024-06-10 11:49:03.739431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.739443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.739673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.739686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.739985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.740025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.740331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.740371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.740736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.740748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.741049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.741061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.741220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.741232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.741481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.741493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.741678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.741690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.742003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.742042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 
00:40:38.776 [2024-06-10 11:49:03.742268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.742308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.742652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.742693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.743091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.743131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.743507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.743546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.743854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.743895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.744174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.744214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.744487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.744523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.744805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.744818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.745046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.745058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.745390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.745429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 
00:40:38.776 [2024-06-10 11:49:03.745793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.745834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.746127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.746179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.746460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.746500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.746832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.746873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.747220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.747259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.747683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.747763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.748185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.748262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.748591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.748635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.748929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.748969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.749245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.749284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 
00:40:38.776 [2024-06-10 11:49:03.749661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.749701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4868000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.749920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.749962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.750267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.750307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.750678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.750718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.751063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.751102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.751419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.751458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.776 [2024-06-10 11:49:03.751803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.776 [2024-06-10 11:49:03.751844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.776 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.752151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.752191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.752487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.752533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.752911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.752952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 
00:40:38.777 [2024-06-10 11:49:03.753248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.753288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.753658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.753699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.754061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.754102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.754469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.754509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.754892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.754932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.755246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.755286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.755617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.755657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.755944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.755985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.756222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.756261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.756627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.756668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 
00:40:38.777 [2024-06-10 11:49:03.757002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.757043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.757323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.757363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.757715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.757755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.758124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.758164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.758507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.758548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.758923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.758935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.759167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.759179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.759430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.759442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.759726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.759738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.760043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.760083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 
00:40:38.777 [2024-06-10 11:49:03.760371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.760411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.760778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.760818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.761121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.761161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.761504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.761544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.761867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.761907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.762267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.762307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.762650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.762690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.762924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.762936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.763251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.763290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.763637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.763676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 
00:40:38.777 [2024-06-10 11:49:03.763922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.763935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.764279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.764319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.764687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.764728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.764942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.764954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.765295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.777 [2024-06-10 11:49:03.765335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.777 qpair failed and we were unable to recover it. 00:40:38.777 [2024-06-10 11:49:03.765619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.765659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.765873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.765913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.766286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.766326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.766574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.766629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.766930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.766969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 
00:40:38.778 [2024-06-10 11:49:03.767311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.767351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.767715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.767755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.768141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.768180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.768532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.768572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.768929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.768941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.769134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.769146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.769376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.769415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.769760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.769811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.770150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.770191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.770489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.770529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 
00:40:38.778 [2024-06-10 11:49:03.770897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.770937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.771292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.771331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.771584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.771625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.771971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.772011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.772290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.772329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.772672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.772713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.773080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.773119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.773506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.773546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.773852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.773892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.774290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.774330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 
00:40:38.778 [2024-06-10 11:49:03.774631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.774672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.774965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.775004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.775302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.775342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.775572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.775620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.775987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.776026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.776435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.776516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.776907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.776952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.777326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.777366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.777665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.777706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.778023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.778062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 
00:40:38.778 [2024-06-10 11:49:03.778354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.778394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.778692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.778732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.779024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.778 [2024-06-10 11:49:03.779063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.778 qpair failed and we were unable to recover it. 00:40:38.778 [2024-06-10 11:49:03.779408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.779448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.779735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.779754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.780054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.780074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.780304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.780323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.780624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.780644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.780975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.781024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.781414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.781453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 
00:40:38.779 [2024-06-10 11:49:03.781834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.781854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.782158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.782177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.782522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.782540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.782796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.782836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.783213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.783253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.783617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.783657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.783883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.783922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.784219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.784259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.784537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.784586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.784932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.784972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 
00:40:38.779 [2024-06-10 11:49:03.785284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.785323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.785689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.785729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.786082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.786122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.786501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.786541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.786850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.786889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.787162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.787181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.787452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.787471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.787821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.787840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.788167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.788206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 00:40:38.779 [2024-06-10 11:49:03.788571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:38.779 [2024-06-10 11:49:03.788617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:38.779 qpair failed and we were unable to recover it. 
00:40:38.779 [2024-06-10 11:49:03.788907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:38.779 [2024-06-10 11:49:03.788947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420
00:40:38.779 qpair failed and we were unable to recover it.
00:40:38.779 [2024-06-10 11:49:03.789291 - 11:49:03.795147] (the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420)
00:40:38.780 [2024-06-10 11:49:03.795490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:38.780 [2024-06-10 11:49:03.795568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:38.780 qpair failed and we were unable to recover it.
00:40:39.058 [2024-06-10 11:49:03.795963 - 11:49:03.854192] (the same failure sequence repeats for tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it.")
00:40:39.058 [2024-06-10 11:49:03.854481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.854521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.854875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.854915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.855300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.855340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.855617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.855657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.855879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.855913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.856217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.856229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.856451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.856489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.856833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.856872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.857183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.857237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.857594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.857634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 
00:40:39.058 [2024-06-10 11:49:03.857994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.858033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.858255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.858267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.858591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.858630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.858798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.858838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.859077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.859116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.859444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.859483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.859693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.859733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.860016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.860056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.860356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.860395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.860682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.860721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 
00:40:39.058 [2024-06-10 11:49:03.861094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.861133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.861499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.861538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.861931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.861982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.862302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.862342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.862710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.862750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.863042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.863053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.863290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.863302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.863534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.863546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.863819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.863831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.864143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.864155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 
00:40:39.058 [2024-06-10 11:49:03.864440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.864479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.864800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.864841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.865125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.865164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.865441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.865480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.058 [2024-06-10 11:49:03.865758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.058 [2024-06-10 11:49:03.865798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.058 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.866077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.866128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.866491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.866531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.866711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.866751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.866982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.867021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.867296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.867308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 
00:40:39.059 [2024-06-10 11:49:03.867633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.867673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.867999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.868038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.868403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.868442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.868846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.868887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.869181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.869221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.869496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.869535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.869920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.869959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.870241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.870281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.870530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.870569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.870836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.870877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 
00:40:39.059 [2024-06-10 11:49:03.871280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.871320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.871631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.871670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.872013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.872053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.872421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.872460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.872770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.872810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.873172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.873185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.873478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.873490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.873800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.873840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.874081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.874121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.874425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.874464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 
00:40:39.059 [2024-06-10 11:49:03.874831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.874882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.875127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.875139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.875454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.875493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.875858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.875898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.876186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.876198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.876426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.876438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.876740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.876752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.876966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.876978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.877309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.877349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.877724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.877764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 
00:40:39.059 [2024-06-10 11:49:03.877992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.878004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.878343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.878383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.059 [2024-06-10 11:49:03.878741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.059 [2024-06-10 11:49:03.878781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.059 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.879168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.879207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.879568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.879614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.879956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.879970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.880327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.880366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.880682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.880723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.881035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.881074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.881378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.881418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 
00:40:39.060 [2024-06-10 11:49:03.881663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.881703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.881999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.882040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.882415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.882454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.882826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.882866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.883102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.883114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.883425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.883464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.883774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.883815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.884065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.884104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.884402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.884442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.884829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.884870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 
00:40:39.060 [2024-06-10 11:49:03.885239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.885278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.885604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.885644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.885941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.885980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.886345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.886384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.886728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.886768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.886963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.886975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.887227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.887266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.887544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.887593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.887892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.887904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.888195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.888234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 
00:40:39.060 [2024-06-10 11:49:03.888600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.888641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.888942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.888981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.889289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.889339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.889688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.889741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.890142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.890155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.890451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.890463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.890692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.890704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.890959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.890972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.891140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.891153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.060 [2024-06-10 11:49:03.891388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.891400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 
00:40:39.060 [2024-06-10 11:49:03.891593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.060 [2024-06-10 11:49:03.891621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.060 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.891858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.891871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.892183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.892222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.892458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.892497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.892822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.892863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.893152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.893199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.893623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.893665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.893964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.894003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.894374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.894413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.894809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.894859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 
00:40:39.061 [2024-06-10 11:49:03.894973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.894985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.895241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.895280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.895649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.895689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.895925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.895965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.896301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.896340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.896636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.896677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.896977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.897016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.897387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.897427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.897805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.897846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.898133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.898173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 
00:40:39.061 [2024-06-10 11:49:03.898450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.898489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.898835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.898876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.899247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.899286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.899672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.899712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.900076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.900088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.900352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.900387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.900696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.900736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.900969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.901009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.901238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.901277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.901645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.901685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 
00:40:39.061 [2024-06-10 11:49:03.901977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.901989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.061 [2024-06-10 11:49:03.902223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.061 [2024-06-10 11:49:03.902235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.061 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.902546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.902595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.902971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.903011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.903311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.903351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.903595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.903636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.904012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.904052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.904398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.904437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.904806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.904855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.905090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.905112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 
00:40:39.062 [2024-06-10 11:49:03.905336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.905348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.905614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.905626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.905848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.905860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.906164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.906176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.906410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.906422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.906670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.906685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.906929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.906942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.907159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.907198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.907441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.907480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.907754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.907794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 
00:40:39.062 [2024-06-10 11:49:03.908162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.908202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.908502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.908542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.908879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.908918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.909301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.909340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.909706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.909746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.910024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.910063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.910351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.910390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.910684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.910724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.911085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.911125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.911468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.911480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 
00:40:39.062 [2024-06-10 11:49:03.911708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.911721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.911942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.911954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.912099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.912111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.912418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.912457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.912685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.912724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.913091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.913103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.913259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.913271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.913451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.913490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.913639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.913679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 00:40:39.062 [2024-06-10 11:49:03.914025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.062 [2024-06-10 11:49:03.914065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.062 qpair failed and we were unable to recover it. 
00:40:39.062 [2024-06-10 11:49:03.914280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.914319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.914662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.914702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.915085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.915126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.915483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.915523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.915880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.915920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.916131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.916144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.916320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.916332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.916621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.916661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.917037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.917077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.917388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.917400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 
00:40:39.063 [2024-06-10 11:49:03.917721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.917762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.918064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.918104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.918348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.918388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.918692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.918732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.919080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.919120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.919488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.919533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.919851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.919891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.920188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.920227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.920596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.920636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.920979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.921018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 
00:40:39.063 [2024-06-10 11:49:03.921370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.921410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.921755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.921795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.922137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.922177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.922546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.922593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.922940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.922979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.923353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.923392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.923606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.923647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.923999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.924038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.924361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.924400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.924735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.924775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 
00:40:39.063 [2024-06-10 11:49:03.925149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.925189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.925368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.925407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.925717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.925758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.926122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.926162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.926463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.926503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.926863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.926904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.927246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.927286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.927574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.927623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.063 qpair failed and we were unable to recover it. 00:40:39.063 [2024-06-10 11:49:03.927899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.063 [2024-06-10 11:49:03.927938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.928232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.928271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 
00:40:39.064 [2024-06-10 11:49:03.928566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.928582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.928817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.928830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.929090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.929129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.929407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.929446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.929827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.929868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.930237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.930276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.930540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.930552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.930789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.930801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.930967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.930979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.931277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.931316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 
00:40:39.064 [2024-06-10 11:49:03.931687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.931727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.932093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.932132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.932486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.932525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.932835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.932875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.933245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.933284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.933652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.933703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.934023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.934062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.934466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.934505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.934868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.934908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.935268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.935308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 
00:40:39.064 [2024-06-10 11:49:03.935652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.935692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.935898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.935937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.936242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.936282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.936554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.936566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.936795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.936808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.936999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.937039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.937333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.937372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.937653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.937694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.938003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.938043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.938356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.938367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 
00:40:39.064 [2024-06-10 11:49:03.938623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.938635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.938855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.938867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.939176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.939216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.939604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.939643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.940008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.940048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.940334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.940346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.064 [2024-06-10 11:49:03.940629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.064 [2024-06-10 11:49:03.940642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.064 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.940960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.940972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.941127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.941139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.941365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.941377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 
00:40:39.065 [2024-06-10 11:49:03.941635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.941648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.941863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.941875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.942041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.942053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.942380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.942420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.942792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.942832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.943134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.943174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.943544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.943593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.943959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.943999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.944343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.944382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.944769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.944810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 
00:40:39.065 [2024-06-10 11:49:03.945168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.945220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.945590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.945630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.945982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.946021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.946424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.946464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.946828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.946867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.947175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.947188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.947443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.947454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.947768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.947808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.948131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.948170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.948515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.948547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 
00:40:39.065 [2024-06-10 11:49:03.948947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.948989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.949369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.949408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.949793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.949834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.950138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.950151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.950438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.950450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.950741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.950753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.951066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.951106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.951340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.951379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.951742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.951782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.952084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.952124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 
00:40:39.065 [2024-06-10 11:49:03.952441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.952485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.952869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.952919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.953153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.953165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.953495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.953535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.954004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.065 [2024-06-10 11:49:03.954081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.065 qpair failed and we were unable to recover it. 00:40:39.065 [2024-06-10 11:49:03.954464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.954508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.954848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.954890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.955268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.955308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.955711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.955752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.956100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.956140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 
00:40:39.066 [2024-06-10 11:49:03.956528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.956568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.956921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.956960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.957347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.957404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.957689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.957729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.958056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.958096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.958401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.958440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.958829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.958869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.959161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.959200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.959574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.959628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.960010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.960049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 
00:40:39.066 [2024-06-10 11:49:03.960357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.960396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.960766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.960806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.961049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.961088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.961428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.961468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.961770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.961809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.962024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.962064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.962409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.962449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.962823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.962864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.963238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.963278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.963645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.963684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 
00:40:39.066 [2024-06-10 11:49:03.963936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.963975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.964307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.964346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.964720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.964759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.965076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.965115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.965502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.965542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.965898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.965938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.966287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.966327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.066 [2024-06-10 11:49:03.966672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.066 [2024-06-10 11:49:03.966712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.066 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.967005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.967045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.967345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.967391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 
00:40:39.067 [2024-06-10 11:49:03.967687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.967727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.968086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.968126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.968437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.968477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.968845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.968885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.969277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.969316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.969628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.969670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.969986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.970015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.970191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.970204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.970611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.970653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.970945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.970985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 
00:40:39.067 [2024-06-10 11:49:03.971281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.971320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.971655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.971696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.971991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.972031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.972375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.972415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.972730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.972770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.972999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.973039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.973387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.973427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.973819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.973859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.974153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.974200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.974507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.974519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 
00:40:39.067 [2024-06-10 11:49:03.974827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.974867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.975163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.975202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.975587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.975628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.976002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.976042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.976393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.976432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.976749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.976789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.977159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.977205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.977433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.977473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.977771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.977811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.978189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.978231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 
00:40:39.067 [2024-06-10 11:49:03.978534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.978546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.978881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.978921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.979293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.979333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.979677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.979718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.980110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.980150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.980440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.067 [2024-06-10 11:49:03.980452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.067 qpair failed and we were unable to recover it. 00:40:39.067 [2024-06-10 11:49:03.980664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.980676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.980991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.981030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.981368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.981408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.981805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.981846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 
00:40:39.068 [2024-06-10 11:49:03.982199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.982239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.982623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.982664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.982962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.983002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.983360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.983400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.983769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.983809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.984159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.984198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.986883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.986925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.987226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.987266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.987627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.987667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.988081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.988122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 
00:40:39.068 [2024-06-10 11:49:03.988494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.988535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.988929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.988972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.989346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.989386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.989780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.989822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.990185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.990228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.990586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.990627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.990971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.991012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.991393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.991433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.991777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.991817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.992183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.992223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 
00:40:39.068 [2024-06-10 11:49:03.992492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.992504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.992806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.992820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.993202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.993241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.993607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.993648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.994017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.994056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.994383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.994395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.994677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.994691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.995006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.995018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.995328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.995369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.995730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.995788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 
00:40:39.068 [2024-06-10 11:49:03.996155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.996195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.996510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.996522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.996737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.996749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.997073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.068 [2024-06-10 11:49:03.997085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.068 qpair failed and we were unable to recover it. 00:40:39.068 [2024-06-10 11:49:03.997412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:03.997452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:03.997813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:03.997853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:03.998231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:03.998271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:03.998512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:03.998552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:03.998893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:03.998933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:03.999293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:03.999333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 
00:40:39.069 [2024-06-10 11:49:03.999711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:03.999752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.000117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.000157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.000525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.000565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.000891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.000931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.001294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.001334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.001609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.001640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.002005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.002045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.002427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.002467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.002822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.002863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.003182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.003222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 
00:40:39.069 [2024-06-10 11:49:04.003598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.003639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.003875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.003915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.004211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.004251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.004564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.004612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.004980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.005020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.005377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.005417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.005771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.005814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.006118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.006157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.006533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.006572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.006926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.006966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 
00:40:39.069 [2024-06-10 11:49:04.007297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.007337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.007691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.007732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.008108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.008147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.008506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.008545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.008854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.008894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.009209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.009254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.009496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.009509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.009774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.009786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.010112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.010152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.010440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.010480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 
00:40:39.069 [2024-06-10 11:49:04.010849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.010890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.011266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.011306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.069 [2024-06-10 11:49:04.011675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.069 [2024-06-10 11:49:04.011716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.069 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.012073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.012113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.012490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.012530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.012884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.012897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.013205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.013245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.013593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.013633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.013998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.014037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.014389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.014401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 
00:40:39.070 [2024-06-10 11:49:04.014658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.014670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.014971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.015010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.015372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.015412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.015778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.015790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.016090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.016129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.016425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.016465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.016760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.016800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.017171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.017211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.017437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.017476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.017836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.017877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 
00:40:39.070 [2024-06-10 11:49:04.018190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.018229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.018598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.018640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.019010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.019050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.019351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.019391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.019769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.019810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.020191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.020231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.020446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.020458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.020702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.020743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.021108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.021147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.021516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.021556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 
00:40:39.070 [2024-06-10 11:49:04.021846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.021886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.022249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.022289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.022662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.022703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.023074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.023114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.023463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.070 [2024-06-10 11:49:04.023503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.070 qpair failed and we were unable to recover it. 00:40:39.070 [2024-06-10 11:49:04.023881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.023893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.024209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.024256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.024627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.024668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.025036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.025076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.025390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.025430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 
00:40:39.071 [2024-06-10 11:49:04.025697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.025709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.025954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.025966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.026296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.026308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.026593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.026634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.027012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.027052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.027423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.027463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.027811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.027852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.028238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.028279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.028630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.028643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.028948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.028987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 
00:40:39.071 [2024-06-10 11:49:04.029341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.029381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.029743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.029784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.030150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.030189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.030561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.030626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.030930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.030971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.031370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.031409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.031772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.031784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.032125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.032165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.032411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.032451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.032793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.032834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 
00:40:39.071 [2024-06-10 11:49:04.033142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.033182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.033551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.033617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.033899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.033911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.034220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.034260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.034611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.034652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.035022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.035061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.035435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.035476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.035839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.035851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.036086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.036097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.036423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.036463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 
00:40:39.071 [2024-06-10 11:49:04.036833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.036873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.037244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.037284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.037652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.071 [2024-06-10 11:49:04.037663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.071 qpair failed and we were unable to recover it. 00:40:39.071 [2024-06-10 11:49:04.037934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.037975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.038277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.038316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.038611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.038652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.039029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.039074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.039439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.039479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.039846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.039887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.040212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.040252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 
00:40:39.072 [2024-06-10 11:49:04.040599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.040640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.041031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.041071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.041442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.041481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.041854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.041895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.042197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.042237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.042625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.042666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.042947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.042987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.043340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.043380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.043753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.043793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.044142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.044183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 
00:40:39.072 [2024-06-10 11:49:04.044541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.044590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.044964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.045003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.045374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.045413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.045773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.045814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.046132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.046181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.046485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.046525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.046910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.046951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.047319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.047359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.047709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.047750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.048138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.048178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 
00:40:39.072 [2024-06-10 11:49:04.048464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.048504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.048890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.048931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.049310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.049350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.049719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.049731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.050030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.050069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.050437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.050476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.050852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.050893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.051250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.051290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.051657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.051698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.052101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.052141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 
00:40:39.072 [2024-06-10 11:49:04.052441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.052480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.072 qpair failed and we were unable to recover it. 00:40:39.072 [2024-06-10 11:49:04.052857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.072 [2024-06-10 11:49:04.052898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.053172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.053212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.053532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.053572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.053952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.053992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.054363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.054403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.054773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.054819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.055179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.055219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.055597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.055638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.055944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.055984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 
00:40:39.073 [2024-06-10 11:49:04.056357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.056396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.056768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.056808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.057180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.057220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.057522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.057562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.057892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.057932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.058230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.058270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.058646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.058687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.059063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.059103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.059484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.059524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.059841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.059882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 
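Note: errno 111 on Linux is ECONNREFUSED. The repeated posix_sock_create / nvme_tcp_qpair_connect_sock errors above are the host-side initiator retrying TCP connects to 10.0.0.2:4420 while no NVMe-oF target is listening there. A minimal bash sketch of the same condition (illustrative only, not part of the test suite; the address and port are taken from the log):

    # Hedged illustration: attempt a TCP connect to a port with no listener.
    # bash's /dev/tcp pseudo-device performs a connect(); with nothing bound
    # to the port the peer typically answers with RST, so connect() fails with
    # ECONNREFUSED (errno 111), the error SPDK's posix_sock_create() logs above.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 refused (ECONNREFUSED, errno 111)"
    fi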
00:40:39.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4176604 Killed "${NVMF_APP[@]}" "$@" 00:40:39.073 [2024-06-10 11:49:04.060234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.060274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.060653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.060665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:40:39.073 [2024-06-10 11:49:04.060978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.060992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.061277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.061290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:39.073 [2024-06-10 11:49:04.061590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.061603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:39.073 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:39.073 [2024-06-10 11:49:04.061942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.061955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.062168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.062181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 
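For context on the interleaved shell trace above: the previously running target application (PID 4176604) was killed by the test, and disconnect_init restarts it via nvmfappstart with -m 0xF0, an SPDK/DPDK-style hexadecimal CPU core mask that selects cores 4-7. A small helper (illustrative only, not from the test scripts) that decodes such a mask:

    # Hedged helper: list the CPU cores selected by a hex core mask such as
    # the 0xF0 passed to nvmfappstart above.
    decode_core_mask() {
        local mask=$(( $1 )) core=0
        while (( mask )); do
            if (( mask & 1 )); then echo -n "$core "; fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo
    }
    decode_core_mask 0xF0   # prints: 4 5 6 7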
00:40:39.073 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:39.073 [2024-06-10 11:49:04.062507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.062520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.062779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.062802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.063033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.063046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.063291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.063304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.063648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.063661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.063899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.063911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.064143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.064155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.064420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.064432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.064671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.064684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 
00:40:39.073 [2024-06-10 11:49:04.065003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.065016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.073 qpair failed and we were unable to recover it. 00:40:39.073 [2024-06-10 11:49:04.065323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.073 [2024-06-10 11:49:04.065363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.065710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.065753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.066121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.066162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.066518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.066559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.066853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.066866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.067181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.067220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.067541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.067590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.067974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.067987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.068241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.068253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 
00:40:39.074 [2024-06-10 11:49:04.068585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.068598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.068916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.068928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.069160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.069172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.069479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.069491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.069736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.069749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.070058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.070071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.070316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.070329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.070641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.070682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.071050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.071090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 
00:40:39.074 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4177441 00:40:39.074 [2024-06-10 11:49:04.071482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.071523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4177441 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:39.074 [2024-06-10 11:49:04.071919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 4177441 ']' 00:40:39.074 [2024-06-10 11:49:04.071961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:39.074 [2024-06-10 11:49:04.072267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.072310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:39.074 [2024-06-10 11:49:04.072670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.072712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:39.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:39.074 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:39.074 [2024-06-10 11:49:04.073089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.073130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 
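The trace above shows the restart sequence: nvmfpid records the PID of the new nvmf_tgt launched inside the cvl_0_0_ns_spdk network namespace, and waitforlisten then blocks until that process is up and its RPC socket /var/tmp/spdk.sock is available (hence the "Waiting for process to start up..." message and max_retries=100). A rough, hedged approximation of such a wait loop, not the actual SPDK helper:

    # Hypothetical sketch of a wait-for-listen loop: poll until the target
    # process (here PID 4177441) is alive and its RPC UNIX socket exists.
    wait_for_rpc() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # process exited early
            [ -S "$rpc_sock" ] && return 0           # socket file present; assume it accepts connections
            sleep 0.5
        done
        return 1
    }
    # Example: wait_for_rpc 4177441 /var/tmp/spdk.sock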
00:40:39.074 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:39.074 [2024-06-10 11:49:04.073454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.073495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.073846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.073889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.074214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.074254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.074626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.074640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.074968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.075008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.075374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.075414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.075783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.075800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.076020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.076034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.076315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.076328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 
00:40:39.074 [2024-06-10 11:49:04.076689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.076730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.077039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.077079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.074 [2024-06-10 11:49:04.077463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.074 [2024-06-10 11:49:04.077504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.074 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.077886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.077927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.078284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.078324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.078712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.078754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.079057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.079096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.079450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.079490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.079878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.079919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.080244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.080285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 
00:40:39.075 [2024-06-10 11:49:04.080641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.080687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.081008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.081022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.081260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.081284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.081466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.081508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.081813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.081854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.082228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.082268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.082572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.082626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.082946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.082973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.083281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.083321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.083655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.083696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 
00:40:39.075 [2024-06-10 11:49:04.084046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.084086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.084497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.084537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.084918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.084959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.085263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.085304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.085664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.085705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.086081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.086122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.086401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.086441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.086796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.086810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.086971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.086985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.087273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.087285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 
00:40:39.075 [2024-06-10 11:49:04.087600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.087641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.088020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.088060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.088436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.088475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.088691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.088704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.089034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.089074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.075 [2024-06-10 11:49:04.089497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.075 [2024-06-10 11:49:04.089536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.075 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.089776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.089790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.090033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.090062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.090394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.090434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.090798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.090811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 
00:40:39.076 [2024-06-10 11:49:04.091036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.091049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.091338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.091351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.091601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.091614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.091907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.091952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.092290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.092331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.092703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.092744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.093025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.093065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.093351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.093390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.093691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.093731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.094040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.094080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 
00:40:39.076 [2024-06-10 11:49:04.094406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.094452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.094710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.094723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.094978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.095018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.095414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.095454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.095830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.095871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.096197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.096237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.096531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.096570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.096959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.096999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.097234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.097274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.097647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.097688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 
00:40:39.076 [2024-06-10 11:49:04.097919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.097959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.098195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.098235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.098596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.098636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.098945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.098985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.099280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.099319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.099715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.099751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.099994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.100006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.100334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.100347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.100667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.100707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.101041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.101080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 
00:40:39.076 [2024-06-10 11:49:04.101402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.101442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.101685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.101725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.102013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.102053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.102366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.076 [2024-06-10 11:49:04.102406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.076 qpair failed and we were unable to recover it. 00:40:39.076 [2024-06-10 11:49:04.102645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.102685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.103073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.103113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.103489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.103528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.103889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.103940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.104250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.104291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.104617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.104658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 
00:40:39.077 [2024-06-10 11:49:04.104889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.104929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.105288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.105328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.105551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.105603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.105916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.105955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.106254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.106293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.106677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.106717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.107087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.107127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.107483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.107522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.107848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.107888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.108257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.108297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 
00:40:39.077 [2024-06-10 11:49:04.108559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.108572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.108825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.108838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.109088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.109127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.109425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.109464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.109754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.109795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.110084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.110126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.110520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.110559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.110875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.110915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.111194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.111234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.111514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.111553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 
00:40:39.077 [2024-06-10 11:49:04.111848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.111889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.112125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.112163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.112535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.112585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.112947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.112987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.113370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.113411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.113789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.113830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.114115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.114155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.114455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.114495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.114855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.114896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.115301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.115340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 
00:40:39.077 [2024-06-10 11:49:04.115672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.115712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.116079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.077 [2024-06-10 11:49:04.116119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.077 qpair failed and we were unable to recover it. 00:40:39.077 [2024-06-10 11:49:04.116473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.116513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.116893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.116906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.117175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.117213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.117513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.117553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.117938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.117979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.118331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.118387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.118571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.118588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.118760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.118773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 
00:40:39.078 [2024-06-10 11:49:04.119038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.119077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.119431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.119471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.119661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.119673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.119916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.119955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.120331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.120371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.120744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.120785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.121141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.121181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.121591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.121632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.121926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.121965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.122320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.122360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 
00:40:39.078 [2024-06-10 11:49:04.122661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.122701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.123027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.123039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.123265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.123277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.123599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.123640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.123944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.123984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.124375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.124411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.124705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.124749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.125118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.125158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.125457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.125497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.125726] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:40:39.078 [2024-06-10 11:49:04.125794] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:39.078 [2024-06-10 11:49:04.125801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.125816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.125986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.125997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.126281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.126293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.126475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.126487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.126792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.126805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.127091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.127104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.127344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.127383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.127704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.127744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.128041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.128053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 
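The "[ DPDK EAL parameters: ... ]" entry above is the argument vector that the newly started SPDK process hands to DPDK's environment abstraction layer before the nvmf application comes up. A minimal sketch of how such a vector is consumed through the public rte_eal_init() API is shown below; the argument values are copied (and trimmed) from the log line, and the program is an illustration of the mechanism, not the actual SPDK start-up path.

/*
 * Minimal sketch, for orientation only: feed the EAL parameters seen in
 * the log to DPDK via rte_eal_init(). Values are trimmed from the log line.
 */
#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                            /* process name, as in the log  */
        "-c", "0xF0",                      /* core mask: cores 4-7         */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--base-virtaddr=0x200000000000",
        "--file-prefix=spdk0",
        "--proc-type=auto",
        NULL
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0])) - 1;

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    /* ... the real application would now bring up the NVMe-oF target ... */
    return 0;
}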
00:40:39.078 [2024-06-10 11:49:04.128342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.128354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.078 qpair failed and we were unable to recover it. 00:40:39.078 [2024-06-10 11:49:04.128585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.078 [2024-06-10 11:49:04.128598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.128813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.128826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.128995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.129034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.129358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.129398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.129693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.129706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.129871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.129884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.130212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.130253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.130573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.130640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.131045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.131085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 
00:40:39.079 [2024-06-10 11:49:04.131486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.131526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.131782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.131796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.131968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.132007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.132287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.132327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.132701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.132743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.133141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.133181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.133499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.133539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.133812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.133824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.134073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.134085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.134397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.134437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 
00:40:39.079 [2024-06-10 11:49:04.134739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.134779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.135178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.135224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.135464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.135504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.135801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.135813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.136062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.136102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.136395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.136436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.136726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.136738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.137035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.137075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.137404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.137443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.137814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.137855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 
00:40:39.079 [2024-06-10 11:49:04.138163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.138203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.138527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.138567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.138951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.138990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.139278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.139317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.079 [2024-06-10 11:49:04.139618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.079 [2024-06-10 11:49:04.139659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.079 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.139944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.139984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.140296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.140335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.140506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.140545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.140926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.140938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.141260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.141300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 
00:40:39.080 [2024-06-10 11:49:04.141604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.141644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.142020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.142078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.142406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.142448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.142827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.142878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.143124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.143136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.143426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.143438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.143673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.143686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.144019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.144058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.144269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.144308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.144657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.144670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 
00:40:39.080 [2024-06-10 11:49:04.144976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.144988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.145136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.145149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.145282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.145295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.145443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.145456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.145741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.145754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.146077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.146117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.146477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.146516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.146900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.146913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.147213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.147226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 00:40:39.080 [2024-06-10 11:49:04.147440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.080 [2024-06-10 11:49:04.147453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.080 qpair failed and we were unable to recover it. 
00:40:39.355 [2024-06-10 11:49:04.147767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.147780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.148074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.148089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.148421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.148433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.148682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.148694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.148914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.148926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.149159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.149172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.149338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.149350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.149596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.149612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.149873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.149886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.150064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.150077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 
00:40:39.355 [2024-06-10 11:49:04.150313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.150325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.150644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.150684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.355 [2024-06-10 11:49:04.150911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.355 [2024-06-10 11:49:04.150951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.355 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.151252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.151292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.151657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.151670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.151988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.152028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.152378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.152418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.152701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.152713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.153024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.153063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.153437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.153477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 
00:40:39.356 [2024-06-10 11:49:04.153733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.153746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.153967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.153980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.154233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.154245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.154358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.154370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.154631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.154672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.155029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.155068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.155363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.155402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.155701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.155741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.156105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.156145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.156493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.156533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 
00:40:39.356 [2024-06-10 11:49:04.156889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.156929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.157322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.157361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.157676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.157716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.157980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.157992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.158245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.158257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.158538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.158550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.158741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.158754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.159036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.159049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.159285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.159326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.159691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.159731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 
00:40:39.356 [2024-06-10 11:49:04.159980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.160003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.160316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.160363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.160611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.160652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.161026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.161065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.161459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.161498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.161880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.161920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.162236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.162276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.162511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.162550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.356 qpair failed and we were unable to recover it. 00:40:39.356 [2024-06-10 11:49:04.162853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.356 [2024-06-10 11:49:04.162893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.163264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.163304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 
00:40:39.357 [2024-06-10 11:49:04.163649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.163690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.163912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.163924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.164209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.164221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.164462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.164501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.164875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.164916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.165300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.165340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.165720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.165761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.166045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.166057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.166269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.166281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.166520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.166560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 
00:40:39.357 [2024-06-10 11:49:04.166941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.166980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.167348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.167388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.167735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.167749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.168034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.168046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.168218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.168231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.168415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.168427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.168731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.168744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.168990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.169002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.169294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.169344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.169643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.169683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 
00:40:39.357 [2024-06-10 11:49:04.169967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.169980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.170301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.170340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.170567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.170617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.170894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.170935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.171213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.171253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.171624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.171665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.171967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.172006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.172277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.172317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.172611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.357 [2024-06-10 11:49:04.172653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.357 qpair failed and we were unable to recover it. 00:40:39.357 [2024-06-10 11:49:04.172953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.172992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 
00:40:39.358 [2024-06-10 11:49:04.173350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.173389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.173711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.173752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.174055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.174095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.174462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.174502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.174921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.174961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.175312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.175351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.175717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.175757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.176123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.176136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.176298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.176311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.176567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.176619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 
00:40:39.358 [2024-06-10 11:49:04.176914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.176953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.177232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.177272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.177637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.177678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.177954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.177993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.178158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.178198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.178547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.178595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.178838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.178878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.179122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.179162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.179505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.179545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.179928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.179969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 
00:40:39.358 [2024-06-10 11:49:04.180343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.180382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.180678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.180719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.181084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.181124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.181475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.181513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.181807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.181819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.182123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.182163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.182526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.182566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.182861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.182873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.183167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.183212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.183560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.183611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 
00:40:39.358 [2024-06-10 11:49:04.183955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.183995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.184384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.184423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.184792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.184833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.185129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.185141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.185504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.358 [2024-06-10 11:49:04.185543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.358 qpair failed and we were unable to recover it. 00:40:39.358 [2024-06-10 11:49:04.185926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.185938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.186229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.186269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.186566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.186613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.186844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.186884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.187100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.187112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 
00:40:39.359 [2024-06-10 11:49:04.187286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.187298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 EAL: No free 2048 kB hugepages reported on node 1 00:40:39.359 [2024-06-10 11:49:04.187595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.187643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.187941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.187980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.188366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.188378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.188662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.188675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.188848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.188860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.189172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.189211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.189490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.189530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.189888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.189900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.190201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.190213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 
00:40:39.359 [2024-06-10 11:49:04.190524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.190535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.190787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.190799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.191017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.191057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.191382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.191421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.191760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.191773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.192009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.192021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.192325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.192337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.192618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.192631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.192810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.192822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.192996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.193008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 
00:40:39.359 [2024-06-10 11:49:04.193254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.193266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.193568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.193585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.193768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.193781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.194060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.194072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.194332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.194343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.194520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.194532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.194740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.194753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.194985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.194997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.195098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.195110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 00:40:39.359 [2024-06-10 11:49:04.195293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.359 [2024-06-10 11:49:04.195316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.359 qpair failed and we were unable to recover it. 
[... the identical three-record sequence — posix.c:1037:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously from 11:49:04.195555 onward; only the final occurrence is reproduced below ...]
00:40:39.365 [2024-06-10 11:49:04.245548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.365 [2024-06-10 11:49:04.245561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.365 qpair failed and we were unable to recover it.
00:40:39.365 [2024-06-10 11:49:04.245787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.365 [2024-06-10 11:49:04.245800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.365 qpair failed and we were unable to recover it. 00:40:39.365 [2024-06-10 11:49:04.246082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.365 [2024-06-10 11:49:04.246094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.365 qpair failed and we were unable to recover it. 00:40:39.365 [2024-06-10 11:49:04.246310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.365 [2024-06-10 11:49:04.246323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.365 qpair failed and we were unable to recover it. 00:40:39.365 [2024-06-10 11:49:04.246547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.365 [2024-06-10 11:49:04.246560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.246801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.246814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.247027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.247040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.247271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.247283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.247598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.247611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.247828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.247840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.248122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.248134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 
00:40:39.366 [2024-06-10 11:49:04.248383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.248395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.248703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.248715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.248992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.249005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.249336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.249348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.249510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.249522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.249766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.249779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.250012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.250024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.250205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.250218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.250516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.250528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.250696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.250708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 
00:40:39.366 [2024-06-10 11:49:04.250927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.250940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.251104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.251115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.251402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.251414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.251579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.251592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.251845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.251857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.252149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.252162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.252458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.252471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.252710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.252722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.253058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.253070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.253285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.253298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 
00:40:39.366 [2024-06-10 11:49:04.253463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.253475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.253807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.253819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.254104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.254117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.254283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.254296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.254456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.254469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.254651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.254663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.254969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.254982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.255264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.255276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.255582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.255597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.255777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.255789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 
00:40:39.366 [2024-06-10 11:49:04.256014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.256026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.366 [2024-06-10 11:49:04.256264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.366 [2024-06-10 11:49:04.256276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.366 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.256426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.256438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.256650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.256663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.256966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.256978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.257109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:39.367 [2024-06-10 11:49:04.257140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.257153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.257330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.257342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.257571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.257587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.257816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.257829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 
00:40:39.367 [2024-06-10 11:49:04.258110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.258123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.258432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.258444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.258689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.258704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.258982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.258995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.259170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.259183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.259418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.259430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.259737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.259750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.259967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.259979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.260147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.260159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.260329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.260341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 
00:40:39.367 [2024-06-10 11:49:04.260624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.260638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.260944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.260958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.261263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.261276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.261509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.261522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.261741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.261754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.261981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.261994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.262226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.262239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.262404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.262416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.262632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.262644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.262827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.262840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 
00:40:39.367 [2024-06-10 11:49:04.263142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.263155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.263445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.263457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.263677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.263690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.263995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.264010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.264363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.264376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.264668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.264682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.264939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.264952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.265257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.265272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.265524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.265539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 00:40:39.367 [2024-06-10 11:49:04.265778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.265791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.367 qpair failed and we were unable to recover it. 
00:40:39.367 [2024-06-10 11:49:04.265923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.367 [2024-06-10 11:49:04.265935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.266160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.266173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.266405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.266418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.266665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.266678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.266911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.266924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.267101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.267114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.267343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.267358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.267670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.267683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.267964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.267976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.268203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.268216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 
00:40:39.368 [2024-06-10 11:49:04.268546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.268559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.268806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.268819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.269071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.269086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.269318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.269331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.269560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.269573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.269881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.269893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.270150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.270163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.270465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.270478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.270676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.270688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.270919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.270931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 
00:40:39.368 [2024-06-10 11:49:04.271236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.271248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.271369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.271381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.271625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.271637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.271813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.271825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.272105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.272118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.272351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.272364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.272599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.272612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.272919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.272931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.273216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.273229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.273336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.273348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 
00:40:39.368 [2024-06-10 11:49:04.273582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.273594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.273765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.273777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.274054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.274067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.274230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.274242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.274480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.274492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.274798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.274811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.275110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.275123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.275374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.275386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.368 [2024-06-10 11:49:04.275671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.368 [2024-06-10 11:49:04.275683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.368 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.275943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.275956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 
00:40:39.369 [2024-06-10 11:49:04.276176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.276189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.276486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.276499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.276735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.276748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.277016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.277029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.277330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.277342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.277593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.277606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.277889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.277902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.278183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.278195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.278497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.278510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.278740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.278754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 
00:40:39.369 [2024-06-10 11:49:04.278992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.279004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.279217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.279230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.279480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.279495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.279657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.279670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.279951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.279963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.280125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.280137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.280415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.280428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.280641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.280653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.280889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.280902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 00:40:39.369 [2024-06-10 11:49:04.281080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.369 [2024-06-10 11:49:04.281091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.369 qpair failed and we were unable to recover it. 
00:40:39.375 [2024-06-10 11:49:04.335184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.335196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.335444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.335456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.335690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.335702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.335928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.335940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.336237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.336249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.336559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.336571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.336884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.336896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.337199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.337212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.337446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.337460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.337741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.337754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 
00:40:39.375 [2024-06-10 11:49:04.338043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.375 [2024-06-10 11:49:04.338056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.375 qpair failed and we were unable to recover it.
00:40:39.375 [2024-06-10 11:49:04.338344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.375 [2024-06-10 11:49:04.338356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.375 qpair failed and we were unable to recover it.
00:40:39.375 [2024-06-10 11:49:04.338633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.375 [2024-06-10 11:49:04.338645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.375 qpair failed and we were unable to recover it.
00:40:39.375 [2024-06-10 11:49:04.338911] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:40:39.375 [2024-06-10 11:49:04.338950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.375 [2024-06-10 11:49:04.338949] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:40:39.375 [2024-06-10 11:49:04.338962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.375 [2024-06-10 11:49:04.338964] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:40:39.375 qpair failed and we were unable to recover it.
00:40:39.375 [2024-06-10 11:49:04.338976] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:40:39.375 [2024-06-10 11:49:04.338986] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:40:39.375 [2024-06-10 11:49:04.339116] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:40:39.375 [2024-06-10 11:49:04.339291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.375 [2024-06-10 11:49:04.339303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.375 qpair failed and we were unable to recover it.
00:40:39.375 [2024-06-10 11:49:04.339159] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:40:39.375 [2024-06-10 11:49:04.339270] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:40:39.375 [2024-06-10 11:49:04.339270] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:40:39.375 [2024-06-10 11:49:04.339555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.375 [2024-06-10 11:49:04.339567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.375 qpair failed and we were unable to recover it.
00:40:39.375 [2024-06-10 11:49:04.339868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.375 [2024-06-10 11:49:04.339881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.375 qpair failed and we were unable to recover it.
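The app_setup_trace NOTICE lines above describe how to pull a tracepoint snapshot from the nvmf target while it is still running. A minimal sketch of that workflow, assuming instance id 0 and the shared-memory file name printed in the log, and that the spdk_trace tool from the same SPDK build is on PATH:

    # Capture a snapshot of events from the running nvmf app (group mask 0xFFFF was enabled at startup)
    spdk_trace -s nvmf -i 0
    # Or keep a copy of the shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0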
00:40:39.375 [2024-06-10 11:49:04.340162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.340174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.340503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.340515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.340797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.340810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.341061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.341073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.341379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.341391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.341608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.341621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.341914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.341926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.342111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.342124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.342436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.342448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 00:40:39.375 [2024-06-10 11:49:04.342692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.375 [2024-06-10 11:49:04.342705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.375 qpair failed and we were unable to recover it. 
00:40:39.376 [2024-06-10 11:49:04.342902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.342915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.343197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.343210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.343518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.343533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.343850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.343863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.344079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.344091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.344351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.344363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.344670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.344683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.345013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.345025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.345188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.345201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.345486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.345499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 
00:40:39.376 [2024-06-10 11:49:04.345846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.345859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.346190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.346202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.346506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.346519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.346771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.346784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.347009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.347021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.347299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.347312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.347552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.347565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.347895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.347908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.348215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.348227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.348475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.348487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 
00:40:39.376 [2024-06-10 11:49:04.348792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.348805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.349040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.349052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.349232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.349245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.349409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.349422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.349713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.349726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.350069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.350082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.350316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.350328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.350563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.350580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.350891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.350904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.351210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.351224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 
00:40:39.376 [2024-06-10 11:49:04.351527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.351540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.351709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.351722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.376 [2024-06-10 11:49:04.351960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.376 [2024-06-10 11:49:04.351972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.376 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.352210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.352223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.352529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.352542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.352846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.352859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.353093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.353105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.353405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.353418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.353630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.353644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.353861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.353874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 
00:40:39.377 [2024-06-10 11:49:04.354178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.354192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.354524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.354537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.354883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.354899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.355175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.355188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.355466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.355479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.355741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.355754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.356006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.356019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.356248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.356261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.356546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.356559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.356874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.356888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 
00:40:39.377 [2024-06-10 11:49:04.357189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.357201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.357509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.357523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.357827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.357841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.358141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.358154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.358462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.358476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.358715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.358730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.359037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.359050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.359290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.359303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.359601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.359615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.359931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.359944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 
00:40:39.377 [2024-06-10 11:49:04.360247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.360261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.360507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.360520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.360767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.360780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.361065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.361078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.361394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.361408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.361733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.361745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.362098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.362112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.362339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.362351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.362649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.362662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.377 [2024-06-10 11:49:04.362985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.362998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 
00:40:39.377 [2024-06-10 11:49:04.363303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.377 [2024-06-10 11:49:04.363315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.377 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.363625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.363638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.363927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.363940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.364198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.364211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.364462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.364475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.364711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.364724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.365005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.365018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.365348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.365361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.365582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.365595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.365908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.365921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 
00:40:39.378 [2024-06-10 11:49:04.366148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.366162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.366448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.366461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.366791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.366807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.367043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.367056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.367270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.367283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.367523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.367536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.367856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.367870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.368152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.368165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.368400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.368412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.368637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.368650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 
00:40:39.378 [2024-06-10 11:49:04.368959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.368971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.369252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.369265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.369587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.369600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.369886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.369900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.370207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.370219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.370523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.370536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.370825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.370838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.371050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.371064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.371314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.371326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.371488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.371501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 
00:40:39.378 [2024-06-10 11:49:04.371737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.371750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.371988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.372000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.372295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.372307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.372611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.372624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.372912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.372925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.373158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.378 [2024-06-10 11:49:04.373172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.378 qpair failed and we were unable to recover it. 00:40:39.378 [2024-06-10 11:49:04.373390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.373403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.373733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.373746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.373987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.374000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.374303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.374316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 
00:40:39.379 [2024-06-10 11:49:04.374620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.374633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.374937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.374950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.375192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.375205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.375488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.375502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.375807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.375821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.376129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.376142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.376435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.376447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.376752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.376765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.377076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.377088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.377369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.377381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 
00:40:39.379 [2024-06-10 11:49:04.377596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.377608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.377920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.377932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.378244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.378260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.378569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.378584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.378887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.378900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.379201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.379213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.379430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.379442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.379774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.379786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.380090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.380102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.380430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.380443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 
00:40:39.379 [2024-06-10 11:49:04.380794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.380806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.381132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.381143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.381369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.381381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.381627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.381639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.381943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.379 [2024-06-10 11:49:04.381955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.379 qpair failed and we were unable to recover it. 00:40:39.379 [2024-06-10 11:49:04.382238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.382250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.382567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.382583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.382857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.382869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.383179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.383192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.383419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.383431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 
00:40:39.380 [2024-06-10 11:49:04.383756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.383768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.384101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.384113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.384335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.384347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.384560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.384572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.384888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.384900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.385151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.385164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.385461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.385473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.385784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.385797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.386059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.386071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.386386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.386437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 
00:40:39.380 [2024-06-10 11:49:04.386771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.386795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.387140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.387159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.387421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.387435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.387744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.387757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.388038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.388051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.388307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.388319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.388536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.388549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.388730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.388743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.389024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.389036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.389264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.389276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 
00:40:39.380 [2024-06-10 11:49:04.389514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.389526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.389742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.389755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.390039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.390054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.390358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.390371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.390669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.390683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.390992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.391006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.391228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.391241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.391542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.391557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.391824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.391839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.380 [2024-06-10 11:49:04.392150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.392163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 
00:40:39.380 [2024-06-10 11:49:04.392393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.380 [2024-06-10 11:49:04.392407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.380 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.392688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.392701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.392930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.392942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.393245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.393257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.393568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.393584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.393818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.393830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.394114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.394126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.394442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.394455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.394818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.394833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.395143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.395157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 
00:40:39.381 [2024-06-10 11:49:04.395451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.395464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.395766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.395782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.396033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.396046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.396282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.396296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.396584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.396599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.396901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.396913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.397203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.397216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.397432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.397444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.397757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.397770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.398087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.398100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 
00:40:39.381 [2024-06-10 11:49:04.398382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.398394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.398669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.398681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.398961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.398973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.399277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.399289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.399609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.399621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.399878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.399890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.400200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.400212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.400510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.400522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.400737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.400750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.401032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.401044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 
00:40:39.381 [2024-06-10 11:49:04.401345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.401356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.401660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.401673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.401930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.401945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.402248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.402260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.402498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.402510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.402812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.402824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.381 qpair failed and we were unable to recover it. 00:40:39.381 [2024-06-10 11:49:04.403136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.381 [2024-06-10 11:49:04.403148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.403429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.403442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.403676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.403688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.403909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.403920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 
00:40:39.382 [2024-06-10 11:49:04.404267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.404279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.404565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.404582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.404811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.404823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.405130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.405142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.405495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.405507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.405837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.405850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.406191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.406203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.406387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.406399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.406629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.406641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.406967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.406979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 
00:40:39.382 [2024-06-10 11:49:04.407226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.407238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.407544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.407557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.407845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.407857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.408138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.408150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.408434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.408446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.408749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.408762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.409059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.409071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.409302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.409314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.409616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.409628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.409887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.409900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 
00:40:39.382 [2024-06-10 11:49:04.410212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.410224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.410453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.410465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.410697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.410709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.410949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.410961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.411267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.411279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.411595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.411608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.411856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.411869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.412168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.412180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.412463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.412475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.412774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.412787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 
00:40:39.382 [2024-06-10 11:49:04.413037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.413049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.413274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.413285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.413616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.413630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.413934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.382 [2024-06-10 11:49:04.413946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.382 qpair failed and we were unable to recover it. 00:40:39.382 [2024-06-10 11:49:04.414203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.414215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.414522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.414535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.414818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.414831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.415145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.415157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.415410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.415422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.415717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.415729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 
00:40:39.383 [2024-06-10 11:49:04.416015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.416027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.416334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.416346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.416659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.416672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.416976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.416988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.417271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.417283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.417537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.417549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.417863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.417875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.418164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.418176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.418481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.418493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.418738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.418750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 
00:40:39.383 [2024-06-10 11:49:04.419050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.419063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.419291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.419303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.419606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.419618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.419901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.419913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.420152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.420164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.420463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.420475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.420803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.420816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.421164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.421176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.421503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.421515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.421802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.421815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 
00:40:39.383 [2024-06-10 11:49:04.422027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.422039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.422253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.422266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.422579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.422592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.422844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.422856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.423145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.423157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.423463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.423475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.423786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.423798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.424103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.424114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.424341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.424353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 00:40:39.383 [2024-06-10 11:49:04.424610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.383 [2024-06-10 11:49:04.424622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.383 qpair failed and we were unable to recover it. 
00:40:39.383 [2024-06-10 11:49:04.424852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.424865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.425174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.425186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.425494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.425508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.425812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.425824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.426131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.426143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.426447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.426459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.426766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.426778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.427067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.427079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.427359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.427371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.427681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.427693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 
00:40:39.384 [2024-06-10 11:49:04.427918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.427930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.428143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.428155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.428460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.428471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.428655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.428668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.428928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.428940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.429195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.429207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.429530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.429542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.429848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.429860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.430164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.430176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.430420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.430432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 
00:40:39.384 [2024-06-10 11:49:04.430604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.430616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.430938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.430951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.431247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.431260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.431562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.431574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.431883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.431895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.432202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.432214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.432465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.432477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.432771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.432784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.433000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.433013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.433293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.433305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 
00:40:39.384 [2024-06-10 11:49:04.433612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.433624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.433946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.384 [2024-06-10 11:49:04.433958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.384 qpair failed and we were unable to recover it. 00:40:39.384 [2024-06-10 11:49:04.434172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.434184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.434484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.434496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.434797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.434809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.435114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.435126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.435430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.435442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.435776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.435788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.436134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.436147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.436382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.436394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 
00:40:39.385 [2024-06-10 11:49:04.436628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.436641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.436945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.436957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.437241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.437255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.437572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.437588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.437825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.437837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.438051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.438063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.438302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.438314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.438624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.438637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.438939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.438951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.439260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.439272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 
00:40:39.385 [2024-06-10 11:49:04.439580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.439593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.439875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.439887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.440129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.440141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.440443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.440455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.440632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.440644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.440955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.440967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.441275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.441288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.441613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.441625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.441855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.441867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.442169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.442181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 
00:40:39.385 [2024-06-10 11:49:04.442431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.442443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.442746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.442759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.443062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.443074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.385 [2024-06-10 11:49:04.443377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.385 [2024-06-10 11:49:04.443389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.385 qpair failed and we were unable to recover it. 00:40:39.658 [2024-06-10 11:49:04.443698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.658 [2024-06-10 11:49:04.443710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.658 qpair failed and we were unable to recover it. 00:40:39.658 [2024-06-10 11:49:04.444014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.658 [2024-06-10 11:49:04.444026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.658 qpair failed and we were unable to recover it. 00:40:39.658 [2024-06-10 11:49:04.444308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.658 [2024-06-10 11:49:04.444320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.658 qpair failed and we were unable to recover it. 00:40:39.658 [2024-06-10 11:49:04.444638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.658 [2024-06-10 11:49:04.444651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.658 qpair failed and we were unable to recover it. 00:40:39.658 [2024-06-10 11:49:04.444953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.658 [2024-06-10 11:49:04.444965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.658 qpair failed and we were unable to recover it. 00:40:39.658 [2024-06-10 11:49:04.445337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.658 [2024-06-10 11:49:04.445371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 
00:40:39.659 [2024-06-10 11:49:04.445703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.445724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.446045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.446064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.446382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.446402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.446648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.446668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.446992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.447011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.447348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.447362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.447692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.447705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.448052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.448064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.448284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.448296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.448462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.448474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 
00:40:39.659 [2024-06-10 11:49:04.448721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.448733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.449043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.449055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.449353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.449365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.449674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.449686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.449936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.449948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.450250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.450262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.450518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.450530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.450833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.450845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.451071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.451083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.451362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.451374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 
00:40:39.659 [2024-06-10 11:49:04.451678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.451690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.451990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.452002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.452308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.452319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.452538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.452550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.452846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.452859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.453090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.453102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.453407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.453420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.453673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.453685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.453994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.454006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.454309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.454321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 
00:40:39.659 [2024-06-10 11:49:04.454653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.454665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.455015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.455027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.455310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.455322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.455548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.455560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.455847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.455859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.456090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.456102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.456335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.456347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.659 [2024-06-10 11:49:04.456678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.659 [2024-06-10 11:49:04.456691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.659 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.456980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.456992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.457278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.457293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 
00:40:39.660 [2024-06-10 11:49:04.457584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.457596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.457824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.457836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.458169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.458182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.458490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.458502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.458734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.458746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.459053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.459066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.459367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.459379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.459602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.459614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.459942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.459954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.460256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.460268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 
00:40:39.660 [2024-06-10 11:49:04.460510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.460522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.460805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.460817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.461126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.461138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.461452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.461464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.461764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.461776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.461992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.462004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.462284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.462296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.462583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.462595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.462901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.462913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.463234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.463246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 
00:40:39.660 [2024-06-10 11:49:04.463544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.463556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.463868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.463881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.464160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.464172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.464422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.464434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.464657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.464670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.464995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.465007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.465243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.465256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.465493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.465505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.465832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.465843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.466031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.466043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 
00:40:39.660 [2024-06-10 11:49:04.466254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.466266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.466423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.466435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.466741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.466754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.467051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.467063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.467307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.467319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.660 qpair failed and we were unable to recover it. 00:40:39.660 [2024-06-10 11:49:04.467602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.660 [2024-06-10 11:49:04.467614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.467930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.467942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.468266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.468279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.468590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.468603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.468761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.468775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 
00:40:39.661 [2024-06-10 11:49:04.469011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.469023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.469186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.469199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.469503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.469516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.469825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.469837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.470071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.470083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.470391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.470403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.470707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.470719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.471024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.471036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.471335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.471347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.471652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.471664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 
00:40:39.661 [2024-06-10 11:49:04.471967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.471979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.472284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.472296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.472548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.472560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.472904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.472916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.473203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.473215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.473500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.473512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.473820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.473832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.474131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.474143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.474440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.474452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.474755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.474767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 
00:40:39.661 [2024-06-10 11:49:04.474981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.474993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.475248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.475260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.475514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.475526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.475831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.475843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.476145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.476158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.476441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.476453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.476770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.476783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.477039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.477051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.477340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.477353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.477567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.477583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 
00:40:39.661 [2024-06-10 11:49:04.477876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.477888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.478179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.478191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.478440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.478452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.661 [2024-06-10 11:49:04.478682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.661 [2024-06-10 11:49:04.478694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.661 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.478997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.479009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.479291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.479303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.479485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.479497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.479828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.479841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.480134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.480146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.480445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.480471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 
00:40:39.662 [2024-06-10 11:49:04.480728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.480741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.481041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.481053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.481361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.481373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.481679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.481691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.481964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.481977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.482278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.482290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.482625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.482637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.482985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.482997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.483323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.483335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.483646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.483658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 
00:40:39.662 [2024-06-10 11:49:04.483913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.483925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.484228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.484240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.484548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.484561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.484897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.484974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1207fc0 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.485305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.485329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.485657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.485677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.485912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.485925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.486231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.486243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.486555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.486567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.486876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.486888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 
00:40:39.662 [2024-06-10 11:49:04.487213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.487225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.487485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.487497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.487790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.487802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.488026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.488038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.488368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.488381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.488687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.488699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.488952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.488964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.489276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.489288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.489590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.489602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.489907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.489919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 
00:40:39.662 [2024-06-10 11:49:04.490230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.490242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.490481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.662 [2024-06-10 11:49:04.490493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.662 qpair failed and we were unable to recover it. 00:40:39.662 [2024-06-10 11:49:04.490798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.490810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.491042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.491054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.491302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.491314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.491625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.491637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.491942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.491954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.492237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.492249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.492566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.492582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.492921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.492935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 
00:40:39.663 [2024-06-10 11:49:04.493154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.493166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.493472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.493484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.493763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.493776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.494029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.494041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.494353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.494366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.494650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.494662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.494981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.494993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.495228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.495240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.495466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.495478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.495782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.495794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 
00:40:39.663 [2024-06-10 11:49:04.496102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.496114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.496430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.496442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.496761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.496774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.497104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.497116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.497407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.497418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.497722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.497734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.498050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.498062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.498296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.498309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.498590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.498602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.498908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.498920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 
00:40:39.663 [2024-06-10 11:49:04.499210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.499222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.663 [2024-06-10 11:49:04.499455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.663 [2024-06-10 11:49:04.499467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.663 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.499752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.499765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.500067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.500079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.500394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.500406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.500706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.500718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.500885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.500897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.501227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.501239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.501561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.501572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.501814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.501826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 
00:40:39.664 [2024-06-10 11:49:04.502121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.502133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.502386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.502398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.502630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.502643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.502879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.502891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.503125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.503137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.503457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.503469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.503793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.503805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.504155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.504167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.504493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.504506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.504800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.504816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 
00:40:39.664 [2024-06-10 11:49:04.505120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.505132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.505371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.505383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.505611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.505623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.505929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.505941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.506252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.506264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.506561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.506573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.506858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.506869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.507168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.507181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.507474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.507486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.507793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.507806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 
00:40:39.664 [2024-06-10 11:49:04.508123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.508135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.508360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.508372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.508694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.508707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.509014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.509026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.509279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.509291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.509595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.509607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.509892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.509904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.510214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.510225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.510383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.510395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 00:40:39.664 [2024-06-10 11:49:04.510646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.664 [2024-06-10 11:49:04.510658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.664 qpair failed and we were unable to recover it. 
00:40:39.665 [2024-06-10 11:49:04.510870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.510882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.511133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.511145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.511433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.511445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.511676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.511689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.511974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.511986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.512291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.512303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.512592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.512604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.512914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.512926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.513168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.513180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.513432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.513444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 
00:40:39.665 [2024-06-10 11:49:04.513658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.513670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.513839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.513852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.514081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.514092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.514350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.514362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.514667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.514679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.514982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.514994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.515218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.515230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.515544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.515557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.515922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.515935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.516146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.516160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 
00:40:39.665 [2024-06-10 11:49:04.516446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.516458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.516691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.516703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.516955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.516967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.517247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.517260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.517564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.517580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.517801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.517813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.518119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.518131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.518364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.518376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.518683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.518696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.519008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.519020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 
00:40:39.665 [2024-06-10 11:49:04.519276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.519289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.519514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.519526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.519828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.519840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.520075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.520087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.520369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.520381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.520661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.520674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.520994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.521006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.521318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.665 [2024-06-10 11:49:04.521330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.665 qpair failed and we were unable to recover it. 00:40:39.665 [2024-06-10 11:49:04.521637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.521649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.521910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.521922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 
00:40:39.666 [2024-06-10 11:49:04.522208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.522220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.522458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.522471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.522770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.522782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.523076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.523088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.523307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.523319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.523621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.523633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.523857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.523869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.524042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.524055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.524286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.524298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.524510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.524522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 
00:40:39.666 [2024-06-10 11:49:04.524873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.524886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.525201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.525213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.525529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.525541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.525845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.525857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.526161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.526173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.526454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.526466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.526686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.526698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.526917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.526929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.527220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.527232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.527558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.527572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 
00:40:39.666 [2024-06-10 11:49:04.527740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.527752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.527996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.528008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.528314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.528326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.528612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.528624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.528790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.528802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.529132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.529144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.529469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.529481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.529694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.529706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.530018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.530030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.530292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.530304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 
00:40:39.666 [2024-06-10 11:49:04.530628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.530640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.530949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.530962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.531263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.531275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.531521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.531533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.531785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.531798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.532010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.532022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.666 qpair failed and we were unable to recover it. 00:40:39.666 [2024-06-10 11:49:04.532306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.666 [2024-06-10 11:49:04.532318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.532537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.532550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.532842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.532854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.533135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.533147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 
00:40:39.667 [2024-06-10 11:49:04.533314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.533326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.533629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.533642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.533876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.533888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.534149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.534161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.534467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.534478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.534782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.534794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.535028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.535040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.535257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.535269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.535572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.535588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.535895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.535907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 
00:40:39.667 [2024-06-10 11:49:04.536208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.536220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.536524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.536536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.536783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.536795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.537095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.537107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.537409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.537421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.537726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.537738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.537898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.537910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.538124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.538137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.538436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.538448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.538764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.538777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 
00:40:39.667 [2024-06-10 11:49:04.539058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.539070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.539376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.539388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.539684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.539697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.540000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.540012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.540331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.540343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.540645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.540657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.540937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.540949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.541215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.541227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.541530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.541542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.541845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.541857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 
00:40:39.667 [2024-06-10 11:49:04.542161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.542174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.667 qpair failed and we were unable to recover it. 00:40:39.667 [2024-06-10 11:49:04.542476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.667 [2024-06-10 11:49:04.542488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.542792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.542804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.543050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.543062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.543384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.543396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.543578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.543590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.543812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.543824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.544146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.544158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.544468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.544480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.544730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.544743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 
00:40:39.668 [2024-06-10 11:49:04.545033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.545045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.545327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.545339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.545553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.545565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.668 [2024-06-10 11:49:04.545872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.668 [2024-06-10 11:49:04.545884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.668 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.546166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.546178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.546498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.546510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.546819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.546831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.547111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.547123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.547437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.547449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.547697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.547709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 
00:40:39.669 [2024-06-10 11:49:04.548015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.548027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.548329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.548341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.548508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.548519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.548840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.548852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.549105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.549117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.549351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.549363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.549676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.549688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.549971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.549983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.550232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.550244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.550464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.550478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 
00:40:39.669 [2024-06-10 11:49:04.550710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.550723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.551054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.551066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.551359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.551371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.551669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.551682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.551997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.552009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.552312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.552324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.552629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.552641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.552926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.552938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.553229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.553241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.553570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.553594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 
00:40:39.669 [2024-06-10 11:49:04.553875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.553886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.554145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.554157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.554307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.554319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.554554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.554566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.554784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.554797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.555089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.555101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.555319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.555331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.555610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.555623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.555930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.555942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.556230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.556242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 
00:40:39.669 [2024-06-10 11:49:04.556554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.556566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.556874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.669 [2024-06-10 11:49:04.556886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.669 qpair failed and we were unable to recover it. 00:40:39.669 [2024-06-10 11:49:04.557095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.557107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.557393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.557405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.557704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.557716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.558035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.558047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.558350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.558363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.558628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.558640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.558944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.558956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.559230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.559243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 
00:40:39.670 [2024-06-10 11:49:04.559478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.559491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.559779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.559791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.560096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.560109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.560410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.560422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.560670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.560682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.560973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.560985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.561301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.561313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.561616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.561628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.561805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.561817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.562095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.562111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 
00:40:39.670 [2024-06-10 11:49:04.562406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.562418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.562642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.562655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.562949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.562961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.563260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.563272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.563456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.563468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.563705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.563717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.563998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.564010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.564336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.564348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.564563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.564578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.564885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.564897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 
00:40:39.670 [2024-06-10 11:49:04.565057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.565070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.565303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.565315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.565643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.565655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.565961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.565973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.566208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.566220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.566428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.566440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.566743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.566755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.567085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.567098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.567400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.567412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 00:40:39.670 [2024-06-10 11:49:04.567583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.670 [2024-06-10 11:49:04.567596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.670 qpair failed and we were unable to recover it. 
00:40:39.670 [2024-06-10 11:49:04.567746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.567758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.568062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.568074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.568239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.568251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.568569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.568590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.568900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.568912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.569188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.569200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.569514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.569526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.569847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.569860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.570092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.570104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.570416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.570428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 
00:40:39.671 [2024-06-10 11:49:04.570637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.570649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.570865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.570878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.571161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.571172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.571388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.571400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.571704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.571717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.571999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.572012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.572334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.572346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.572652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.572664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.572947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.572959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.573254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.573268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 
00:40:39.671 [2024-06-10 11:49:04.573569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.573584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.573898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.573910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.574140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.574151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.574431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.574442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.574738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.574751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.575057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.575069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.575371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.575383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.575689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.575702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.575865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.575877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.576163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.576175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 
00:40:39.671 [2024-06-10 11:49:04.576432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.576444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.576731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.576744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.577057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.577070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.577298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.577311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.577549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.577561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.577850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.577862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.578039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.578052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.578290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.578302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.671 [2024-06-10 11:49:04.578613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.671 [2024-06-10 11:49:04.578625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.671 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.578932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.578944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 
00:40:39.672 [2024-06-10 11:49:04.579244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.579256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.579557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.579569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.579790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.579803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.580125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.580136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.580423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.580435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.580746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.580758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.581077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.581089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.581315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.581327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.581560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.581572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.581808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.581820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 
00:40:39.672 [2024-06-10 11:49:04.582039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.582051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.582347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.582359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.582586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.582598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.582823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.582835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.583160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.583172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.583414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.583426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.583664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.583676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.583986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.583999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.584229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.584241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.584545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.584557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 
00:40:39.672 [2024-06-10 11:49:04.584871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.584883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.585173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.585185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.585489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.585501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.585789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.585802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.586129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.586141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.586445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.586456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.586755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.586768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.587054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.587066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.587294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.587306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.587637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.587650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 
00:40:39.672 [2024-06-10 11:49:04.587899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.587911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.588210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.588222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.588520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.588532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.588838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.588850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.589104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.589116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.589330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.589342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.589658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.672 [2024-06-10 11:49:04.589671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.672 qpair failed and we were unable to recover it. 00:40:39.672 [2024-06-10 11:49:04.589971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.589983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.590234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.590246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.590521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.590533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 
00:40:39.673 [2024-06-10 11:49:04.590760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.590772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.591068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.591080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.591309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.591321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.591623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.591635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.591920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.591932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.592223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.592235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.592520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.592534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.592842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.592855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.593107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.593119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.593437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.593449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 
00:40:39.673 [2024-06-10 11:49:04.593685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.593698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.593942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.593954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.594258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.594270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.594595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.594607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.594832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.594843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.595147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.595160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.595410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.595422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.595636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.595649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.595984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.595996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.596330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.596342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 
00:40:39.673 [2024-06-10 11:49:04.596645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.596657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.596908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.596920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.597253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.597265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.597526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.597538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.597828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.597840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.598069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.598081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.598306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.598318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.598571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.598593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.598878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.598890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 00:40:39.673 [2024-06-10 11:49:04.599194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.673 [2024-06-10 11:49:04.599205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.673 qpair failed and we were unable to recover it. 
00:40:39.673 [2024-06-10 11:49:04.599434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.599446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.599730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.599742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.600024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.600036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.600344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.600356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.600672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.600685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.600986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.600999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.601303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.601315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.601598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.601610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.601920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.601932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.602244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.602256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 
00:40:39.674 [2024-06-10 11:49:04.602563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.602578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.602808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.602820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.603121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.603133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.603446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.603458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.603737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.603749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.604065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.604077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.604386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.604400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.604589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.604601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.604904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.604916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.605200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.605212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 
00:40:39.674 [2024-06-10 11:49:04.605526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.605538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.605771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.605784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.605999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.606011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.606313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.606325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.606631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.606643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.606928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.606941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.607248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.607260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.607553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.607565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.607870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.607882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.608201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.608213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 
00:40:39.674 [2024-06-10 11:49:04.608443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.608455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.608714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.608727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.608984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.608996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.609300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.609312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.609620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.609632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.609864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.609876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.610180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.610192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.610408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.610420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.674 [2024-06-10 11:49:04.610669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.674 [2024-06-10 11:49:04.610682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.674 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.610853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.610865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 
00:40:39.675 [2024-06-10 11:49:04.611117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.611129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.611431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.611443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.611760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.611772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.612080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.612093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.612394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.612406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.612633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.612645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.612961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.612973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.613234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.613246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.613554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.613566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.613877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.613890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 
00:40:39.675 [2024-06-10 11:49:04.614101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.614113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.614343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.614355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.614587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.614600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.614896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.614908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.615145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.615156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.615455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.615467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.615688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.615702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.615982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.615994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.616297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.616310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.616555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.616568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 
00:40:39.675 [2024-06-10 11:49:04.616851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.616862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.617088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.617100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.617412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.617424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.617660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.617672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.617977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.617990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.618302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.618313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.618546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.618558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.618870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.618882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.619114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.619125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.619335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.619347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 
00:40:39.675 [2024-06-10 11:49:04.619638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.619651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.619944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.619956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.620101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.620114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.620420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.620432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.620763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.620775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.621073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.621085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.675 [2024-06-10 11:49:04.621389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.675 [2024-06-10 11:49:04.621401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.675 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.621726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.621738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.622018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.622030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.622329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.622341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 
00:40:39.676 [2024-06-10 11:49:04.622505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.622517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.622766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.622779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.622928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.622940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.623271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.623283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.623590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.623602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.623835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.623848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.624073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.624085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.624376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.624388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.624716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.624728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.625031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.625043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 
00:40:39.676 [2024-06-10 11:49:04.625298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.625310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.625593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.625605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.625914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.625926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.626170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.626182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.626410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.626422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.626709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.626721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.626883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.626897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.627201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.627213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.627473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.627485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.627761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.627773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 
00:40:39.676 [2024-06-10 11:49:04.628089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.628101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.628389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.628401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.628662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.628674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.628890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.628901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.629222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.629234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.629487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.629499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.629784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.629796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.630074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.630086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.630395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.630407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.630726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.630738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 
00:40:39.676 [2024-06-10 11:49:04.631015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.631027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.631341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.631353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.631644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.631656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.631915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.631927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.632227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.676 [2024-06-10 11:49:04.632238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.676 qpair failed and we were unable to recover it. 00:40:39.676 [2024-06-10 11:49:04.632529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.632541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.632849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.632862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.633167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.633179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.633463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.633475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.633699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.633712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 
00:40:39.677 [2024-06-10 11:49:04.634005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.634017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.634291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.634304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.634608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.634620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.634921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.634934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.635242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.635254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.635572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.635589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.635775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.635787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.636069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.636081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.636306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.636318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.636613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.636625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 
00:40:39.677 [2024-06-10 11:49:04.636950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.636962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.637266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.637278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.637582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.637595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.637873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.637885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.638184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.638195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.638498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.638510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.638839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.638853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.639205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.639217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.639543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.639555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.639865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.639878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 
00:40:39.677 [2024-06-10 11:49:04.640111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.640123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.640436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.640448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.640673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.640685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.640993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.677 [2024-06-10 11:49:04.641005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.677 qpair failed and we were unable to recover it. 00:40:39.677 [2024-06-10 11:49:04.641302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.641314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.641542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.641554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.641781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.641793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.642115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.642127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.642433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.642445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.642727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.642739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 
00:40:39.678 [2024-06-10 11:49:04.642997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.643009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.643323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.643335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.643646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.643658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.643957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.643969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.644282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.644294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.644574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.644594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.644900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.644912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.645220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.645232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.645535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.645547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.645850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.645863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 
00:40:39.678 [2024-06-10 11:49:04.646171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.646183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.646424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.646436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.646663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.646675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.647009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.647021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.647312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.647324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.647482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.647494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.647732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.647745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.648069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.648081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.648386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.648398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.648731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.648744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 
00:40:39.678 [2024-06-10 11:49:04.649074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.649085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.649317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.649329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.649564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.649580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.649821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.649833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.650143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.678 [2024-06-10 11:49:04.650155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.678 qpair failed and we were unable to recover it. 00:40:39.678 [2024-06-10 11:49:04.650461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.650473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.650776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.650790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.651015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.651027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.651208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.651220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.651524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.651536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 
00:40:39.679 [2024-06-10 11:49:04.651825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.651838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.652127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.652139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.652365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.652377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.652677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.652689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.652997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.653009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.653240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.653252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.653507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.653519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.653828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.653840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.654142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.654154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.654456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.654468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 
00:40:39.679 [2024-06-10 11:49:04.654775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.654788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.655094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.655106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.655412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.655424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.655725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.655738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.655954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.655966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.656268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.656280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.656584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.656597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.656899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.656911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.657219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.657231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.657535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.657547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 
00:40:39.679 [2024-06-10 11:49:04.657850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.657862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.658160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.658172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.658477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.658489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.658739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.658751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.658966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.658978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.659285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.659297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.659552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.659564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.659869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.659881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.660124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.660136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.660415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.660427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 
00:40:39.679 [2024-06-10 11:49:04.660678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.660690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.660973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.660985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.661296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.679 [2024-06-10 11:49:04.661308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.679 qpair failed and we were unable to recover it. 00:40:39.679 [2024-06-10 11:49:04.661527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.661539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.661870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.661883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.662124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.662136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.662437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.662451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.662679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.662691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.662966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.662978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.663278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.663290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 
00:40:39.680 [2024-06-10 11:49:04.663617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.663630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.663875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.663887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.664170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.664182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.664458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.664470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.664772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.664784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.665010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.665022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.665329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.665341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.665569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.665584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.665894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.665905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.666121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.666133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 
00:40:39.680 [2024-06-10 11:49:04.666467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.666479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.666783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.666795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.667125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.667138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.667373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.667385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.667629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.667641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.667966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.667978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.668326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.668337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.668656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.668668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.668954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.668966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 00:40:39.680 [2024-06-10 11:49:04.669182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.680 [2024-06-10 11:49:04.669193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.680 qpair failed and we were unable to recover it. 
00:40:39.680 [2024-06-10 11:49:04.669466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.680 [2024-06-10 11:49:04.669478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.680 qpair failed and we were unable to recover it.
00:40:39.680 [2024-06-10 11:49:04.669788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.680 [2024-06-10 11:49:04.669800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.680 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats for each intervening connection attempt, with only the timestamps changing, from 11:49:04.670126 through 11:49:04.727659; every attempt to reach 10.0.0.2 port 4420 via tqpair=0x7f4870000b90 fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:40:39.687 [2024-06-10 11:49:04.727888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.687 [2024-06-10 11:49:04.727900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.687 qpair failed and we were unable to recover it.
00:40:39.687 [2024-06-10 11:49:04.728204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.728216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.728525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.728537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.728850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.728862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.729093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.729105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.729395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.729408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.729735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.729748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.730030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.730045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.730293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.730306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.730608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.730621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.730880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.730892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 
00:40:39.687 [2024-06-10 11:49:04.731055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.731067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.731364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.731376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.731664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.731677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.731922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.731934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.732121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.732133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.732463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.732475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.732730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.732742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.732954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.732966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.733137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.733149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.733401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.733413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 
00:40:39.687 [2024-06-10 11:49:04.733728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.733741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.733925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.733937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.734167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.734180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.734433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.734446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.734672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.734684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.734868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.734879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.687 qpair failed and we were unable to recover it. 00:40:39.687 [2024-06-10 11:49:04.735102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.687 [2024-06-10 11:49:04.735114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.735384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.735396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.735698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.735710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.735938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.735950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 
00:40:39.688 [2024-06-10 11:49:04.736114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.736126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.736408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.736420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.736749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.736762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.736954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.736968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.737208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.737220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.737455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.737467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.737770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.737783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.737997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.738009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.738225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.738237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.738483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.738495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 
00:40:39.688 [2024-06-10 11:49:04.738815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.738828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.739095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.739107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.739412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.739424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.739735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.739747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.739980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.739992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.740293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.740305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.740587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.740599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.740861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.740873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.741086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.741098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.741399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.741412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 
00:40:39.688 [2024-06-10 11:49:04.741697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.741710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.741960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.741972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.742248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.742260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.742500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.742512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.742743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.742755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.742941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.742954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.743279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.743291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.743505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.743517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.743820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.743832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.744140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.744152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 
00:40:39.688 [2024-06-10 11:49:04.744422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.744434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.744662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.744674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.744982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.744994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.745291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.745303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.688 [2024-06-10 11:49:04.745633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.688 [2024-06-10 11:49:04.745645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.688 qpair failed and we were unable to recover it. 00:40:39.689 [2024-06-10 11:49:04.745952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.689 [2024-06-10 11:49:04.745964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.689 qpair failed and we were unable to recover it. 00:40:39.689 [2024-06-10 11:49:04.746208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.689 [2024-06-10 11:49:04.746220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.689 qpair failed and we were unable to recover it. 00:40:39.689 [2024-06-10 11:49:04.746522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.689 [2024-06-10 11:49:04.746534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.689 qpair failed and we were unable to recover it. 00:40:39.689 [2024-06-10 11:49:04.746808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.689 [2024-06-10 11:49:04.746820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.689 qpair failed and we were unable to recover it. 00:40:39.689 [2024-06-10 11:49:04.747055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.689 [2024-06-10 11:49:04.747068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.689 qpair failed and we were unable to recover it. 
00:40:39.689 [2024-06-10 11:49:04.747322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.689 [2024-06-10 11:49:04.747334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.689 qpair failed and we were unable to recover it. 00:40:39.689 [2024-06-10 11:49:04.747550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.689 [2024-06-10 11:49:04.747562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.689 qpair failed and we were unable to recover it. 00:40:39.689 [2024-06-10 11:49:04.747882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.747894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 00:40:39.965 [2024-06-10 11:49:04.748131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.748145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 00:40:39.965 [2024-06-10 11:49:04.748383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.748395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 00:40:39.965 [2024-06-10 11:49:04.748703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.748715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 00:40:39.965 [2024-06-10 11:49:04.748973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.748985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 00:40:39.965 [2024-06-10 11:49:04.749279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.749291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 00:40:39.965 [2024-06-10 11:49:04.749469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.749482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 00:40:39.965 [2024-06-10 11:49:04.749770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.749783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 
00:40:39.965 [2024-06-10 11:49:04.750023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.750035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 00:40:39.965 [2024-06-10 11:49:04.750335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.965 [2024-06-10 11:49:04.750348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.965 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.750631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.750643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.750912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.750924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.751229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.751241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.751523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.751536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.751851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.751864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.752196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.752210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.752542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.752554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.752811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.752824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 
00:40:39.966 [2024-06-10 11:49:04.753057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.753070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.753342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.753354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.753614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.753627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.753852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.753864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.754147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.754159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.754465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.754477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.754791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.754804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.755103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.755115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.755459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.755471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.755769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.755783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 
00:40:39.966 [2024-06-10 11:49:04.756018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.756031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.756210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.756222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.756493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.756505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.756810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.756824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.756989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.757001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.757340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.757352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.757588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.757601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.757831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.757843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.758003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.758015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.758342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.758354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 
00:40:39.966 [2024-06-10 11:49:04.758704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.758716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.759000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.759012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.759265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.759277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.759533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.759546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.759831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.759844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.760075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.760088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.760315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.760327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.760634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.760647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.760880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.760892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 00:40:39.966 [2024-06-10 11:49:04.761145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.966 [2024-06-10 11:49:04.761157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.966 qpair failed and we were unable to recover it. 
00:40:39.967 [2024-06-10 11:49:04.761409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.761421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.761589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.761602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.761840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.761852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.762089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.762101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.762402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.762414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.762661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.762673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.762930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.762941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.763180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.763192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.763438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.763452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.763711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.763724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 
00:40:39.967 [2024-06-10 11:49:04.764034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.764046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.764262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.764275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.764507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.764519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.764805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.764817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.764994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.765007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.765310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.765323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.765555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.765567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.765814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.765826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.766107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.766120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 00:40:39.967 [2024-06-10 11:49:04.766446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.967 [2024-06-10 11:49:04.766458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.967 qpair failed and we were unable to recover it. 
00:40:39.973 [2024-06-10 11:49:04.823006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.823018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.823173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.823185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.823414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.823426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.823591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.823604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.823831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.823843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.824017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.824029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.824261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.824273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.824582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.824595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.824854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.824866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.825150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.825162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 
00:40:39.973 [2024-06-10 11:49:04.825412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.825424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.825730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.825743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.825968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.825981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.826244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.826259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.826542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.826555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.826855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.826867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.827043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.827056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.827232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.827243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.827392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.827404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.827699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.827719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 
00:40:39.973 [2024-06-10 11:49:04.827907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.827919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.973 [2024-06-10 11:49:04.828097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.973 [2024-06-10 11:49:04.828109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.973 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.828369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.828381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.828639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.828652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.828867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.828880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.829138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.829150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.829398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.829410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.829723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.829735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.829963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.829975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.830289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.830301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 
00:40:39.974 [2024-06-10 11:49:04.830599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.830612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.830796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.830808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.830976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.830989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.831277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.831289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.831551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.831564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.831780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.831793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.831941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.831954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.832276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.832288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.832508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.832520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.832700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.832714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 
00:40:39.974 [2024-06-10 11:49:04.832954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.832968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.833141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.833154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.833460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.833472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.833656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.833668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.833833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.833845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.834083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.834100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.834401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.834413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.834648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.834661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.834890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.834902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.835133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.835145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 
00:40:39.974 [2024-06-10 11:49:04.835241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.835252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.835438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.835451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.835683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.835696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.835858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.835873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.836154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.836167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.836355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.974 [2024-06-10 11:49:04.836367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.974 qpair failed and we were unable to recover it. 00:40:39.974 [2024-06-10 11:49:04.836592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.836605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.836888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.836900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.837067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.837079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.837261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.837273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 
00:40:39.975 [2024-06-10 11:49:04.837500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.837512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.837725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.837737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.837957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.837969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.838199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.838211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.838514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.838526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.838762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.838774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.838942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.838954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.839212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.839225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.839397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.839409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.839582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.839594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 
00:40:39.975 [2024-06-10 11:49:04.839712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.839724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.839882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.839894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.840051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.840063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.840229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.840242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.840499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.840511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.840779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.840791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.841010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.841022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.841240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.841252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.841449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.841461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.841699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.841711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 
00:40:39.975 [2024-06-10 11:49:04.841999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.842040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.842287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.842308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.842623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.842644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.842885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.842905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4878000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.843081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.843095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.843255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.843267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.843426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.843438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.843682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.975 [2024-06-10 11:49:04.843695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.975 qpair failed and we were unable to recover it. 00:40:39.975 [2024-06-10 11:49:04.843870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.843882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.844093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.844105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 
00:40:39.976 [2024-06-10 11:49:04.844351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.844363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.844527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.844539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.844757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.844769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.844950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.844965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.845215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.845228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.845408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.845421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.845674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.845687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.845893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.845906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.846197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.846209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.846435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.846447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 
00:40:39.976 [2024-06-10 11:49:04.846624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.846636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.846800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.846812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.847026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.847038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.847343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.847355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.847530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.847542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.847769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.847781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.847933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.847945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.848252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.848265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.848507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.848519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.848687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.848700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 
00:40:39.976 [2024-06-10 11:49:04.848984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.848996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.849224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.849236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.849473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.849485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.849646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.849659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.849818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.849831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.850064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.850077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.850235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.850247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.850524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.850537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.850699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.850711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.850875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.850887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 
00:40:39.976 [2024-06-10 11:49:04.851040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.851052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.851281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.851293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.851527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.851539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.851716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.851729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.852033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.852045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.852276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.852289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.976 qpair failed and we were unable to recover it. 00:40:39.976 [2024-06-10 11:49:04.852509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.976 [2024-06-10 11:49:04.852522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.852749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.852761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.852929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.852941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.853166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.853178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 
00:40:39.977 [2024-06-10 11:49:04.853336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.853348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.853586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.853598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.853777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.853789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.853933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.853947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.854165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.854177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.854399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.854412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.854641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.854654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.854885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.854898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.855134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.855146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 00:40:39.977 [2024-06-10 11:49:04.855378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.977 [2024-06-10 11:49:04.855390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.977 qpair failed and we were unable to recover it. 
00:40:39.982 [2024-06-10 11:49:04.899687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.899700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.982 qpair failed and we were unable to recover it. 00:40:39.982 [2024-06-10 11:49:04.899855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.899867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.982 qpair failed and we were unable to recover it. 00:40:39.982 [2024-06-10 11:49:04.900026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.900038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.982 qpair failed and we were unable to recover it. 00:40:39.982 [2024-06-10 11:49:04.900249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.900263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.982 qpair failed and we were unable to recover it. 00:40:39.982 [2024-06-10 11:49:04.900447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.900459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.982 qpair failed and we were unable to recover it. 00:40:39.982 [2024-06-10 11:49:04.900688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.900700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.982 qpair failed and we were unable to recover it. 00:40:39.982 [2024-06-10 11:49:04.900919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.900932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.982 qpair failed and we were unable to recover it. 00:40:39.982 [2024-06-10 11:49:04.901156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.901169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.982 qpair failed and we were unable to recover it. 00:40:39.982 [2024-06-10 11:49:04.901317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.901329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.982 qpair failed and we were unable to recover it. 00:40:39.982 [2024-06-10 11:49:04.901551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.982 [2024-06-10 11:49:04.901563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 
00:40:39.983 [2024-06-10 11:49:04.901730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.901743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.901971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.901983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.902193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.902206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.902319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.902331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.902557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.902569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.902807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.902819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.903036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.903049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.903221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.903233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.903445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.903457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.903667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.903680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 
00:40:39.983 [2024-06-10 11:49:04.903840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.903854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.904175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.904187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.904438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.904450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.904698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.904712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.904991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.905003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.905288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.905301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.905457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.905470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.905642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.905655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.905878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.905890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.906113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.906125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 
00:40:39.983 [2024-06-10 11:49:04.906425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.906438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.906596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.906610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.906843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.906855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.907000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.907013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.907241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.907253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.907429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.907442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.907601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.907614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.907795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.907807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.908092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.908104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.908274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.908286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 
00:40:39.983 [2024-06-10 11:49:04.908533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.908546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.908760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.908773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.908971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.908983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.909283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.909296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.909518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.909530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.909833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.909846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.910007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.910019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.983 [2024-06-10 11:49:04.910302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.983 [2024-06-10 11:49:04.910314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.983 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.910475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.910488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.910653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.910666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 
00:40:39.984 [2024-06-10 11:49:04.910952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.910964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.911218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.911231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.911450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.911463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.911616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.911628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.911781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.911793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.912136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.912148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.912441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.912453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.912734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.912747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.912916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.912928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.913232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.913244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 
00:40:39.984 [2024-06-10 11:49:04.913458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.913473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.913638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.913651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.913829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.913842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.914007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.914019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.914194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.914206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.914487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.914499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.914731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.914743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.915030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.915042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.915344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.915356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.915599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.915612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 
00:40:39.984 [2024-06-10 11:49:04.915866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.915879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.916186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.916198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.916468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.916481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.916714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.916726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.917012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.917025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.917196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.917209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.917361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.917373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.917618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.984 [2024-06-10 11:49:04.917631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.984 qpair failed and we were unable to recover it. 00:40:39.984 [2024-06-10 11:49:04.917938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.917950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.918115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.918127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 
00:40:39.985 [2024-06-10 11:49:04.918295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.918309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.918523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.918535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.918760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.918773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.918931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.918944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.919195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.919207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.919423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.919436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.919585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.919598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.919754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.919767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.919980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.919992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.920211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.920224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 
00:40:39.985 [2024-06-10 11:49:04.920436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.920448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.920685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.920697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.920875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.920887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.921068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.921081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.921228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.921240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.921395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.921407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.921630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.921643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.921869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.921882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.922202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.922214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.922429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.922442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 
00:40:39.985 [2024-06-10 11:49:04.922595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.922610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.922843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.922856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.923034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.923046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.923272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.923284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.923459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.923471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.923630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.923643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.923864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.923876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.924156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.985 [2024-06-10 11:49:04.924169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.985 qpair failed and we were unable to recover it. 00:40:39.985 [2024-06-10 11:49:04.924403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.924415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.924574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.924593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 
00:40:39.986 [2024-06-10 11:49:04.924769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.924781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.924937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.924949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.925231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.925244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.925468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.925481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.925646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.925659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.925829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.925842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.926113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.926125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.926417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.926430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.926597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.926609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.926913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.926926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 
00:40:39.986 [2024-06-10 11:49:04.927092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.927105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.927272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.927284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.927493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.927506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.927729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.927741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.927827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.927839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.928136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.928148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.928495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.928508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.928677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.928690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.928927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.928939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.929152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.929164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 
00:40:39.986 [2024-06-10 11:49:04.929446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.929460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.929629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.929641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.929812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.929824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.929994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.930006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.930285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.930298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.930467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.930479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.930719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.930732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.931036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.931049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.931221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.931234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.931396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.931408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 
00:40:39.986 [2024-06-10 11:49:04.931631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.931674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.931908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.931921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.932138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.932150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.932311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.932323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.932614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.932626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.986 [2024-06-10 11:49:04.932854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.986 [2024-06-10 11:49:04.932867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.986 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.933093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.933106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.933252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.933265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.933512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.933525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.933805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.933818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 
00:40:39.987 [2024-06-10 11:49:04.933988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.934001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.934229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.934241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.934453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.934465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.934699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.934712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.934930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.934943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.935169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.935181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.935429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.935442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.935680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.935693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.935869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.935881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.936096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.936108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 
00:40:39.987 [2024-06-10 11:49:04.936415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.936427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.936726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.936739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.936994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.937006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.937178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.937190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.937401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.937414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.937651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.937664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.937878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.937890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.938147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.938159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.938456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.938468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.938635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.938647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 
00:40:39.987 [2024-06-10 11:49:04.938876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.938889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.939188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.939201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.939438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.939451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.939619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.939632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.939914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.939926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.940147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.940159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.940387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.940400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.940682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.940696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.940860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.940873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.941039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.941051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 
00:40:39.987 [2024-06-10 11:49:04.941342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.941356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.941528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.941540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.941780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.987 [2024-06-10 11:49:04.941793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.987 qpair failed and we were unable to recover it. 00:40:39.987 [2024-06-10 11:49:04.942009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.942021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.942172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.942184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.942408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.942422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.942601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.942614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.942867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.942880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.943117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.943129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.943312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.943324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 
00:40:39.988 [2024-06-10 11:49:04.943551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.943565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.943738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.943751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.943938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.943950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.944049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.944062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.944234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.944246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.944528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.944541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.944711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.944724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.944957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.944969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.945185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.945198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.945412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.945424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 
00:40:39.988 [2024-06-10 11:49:04.945655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.945667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.945958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.945970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.946166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.946178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.946362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.946374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.946589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.946602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.946906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.946918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.947165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.947177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.947364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.947377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.947601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.947614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.947875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.947888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 
00:40:39.988 [2024-06-10 11:49:04.948096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.948108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.948371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.948384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.948610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.948623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.948775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.948788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.948952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.948965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.949137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.949149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.949415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.949427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.949657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.949670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.949816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.949828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 00:40:39.988 [2024-06-10 11:49:04.950067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.988 [2024-06-10 11:49:04.950079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.988 qpair failed and we were unable to recover it. 
00:40:39.989 [2024-06-10 11:49:04.950370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.950385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.950530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.950543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.950762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.950775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.950984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.950996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.951189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.951202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.951443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.951455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.951759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.951772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.951990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.952003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.952165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.952178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.952345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.952358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 
00:40:39.989 [2024-06-10 11:49:04.952599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.952612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.952776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.952788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.952988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.953000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.953142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.953155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.953476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.953489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.953768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.953781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.953928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.953941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.954150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.954163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.954409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.954421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.954585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.954598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 
00:40:39.989 [2024-06-10 11:49:04.954754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.954767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.955003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.955016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.955299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.955311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.955477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.955489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.955669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.955681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.955922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.955935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.956082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.956095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.956332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.956345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.956626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.956639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.989 [2024-06-10 11:49:04.956951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.956963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 
00:40:39.989 [2024-06-10 11:49:04.957063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.989 [2024-06-10 11:49:04.957075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.989 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.957220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.957232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.957513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.957526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.957692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.957704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.957953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.957966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.958126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.958138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.958314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.958326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.958609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.958621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.958854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.958866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.959009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.959022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 
00:40:39.990 [2024-06-10 11:49:04.959194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.959209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.959470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.959483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.959702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.959715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.959869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.959882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.960096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.960109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.960334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.960347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.960640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.960652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.960865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.960877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.961205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.961217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.961370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.961383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 
00:40:39.990 [2024-06-10 11:49:04.961561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.961573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.961834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.961847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.962129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.962141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.962368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.962380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.962596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.962609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.962828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.962840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.963149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.963161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.963415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.963427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.963723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.963736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.963986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.963999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 
00:40:39.990 [2024-06-10 11:49:04.964169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.964181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.964473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.964485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.964704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.964717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.965013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.990 [2024-06-10 11:49:04.965026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.990 qpair failed and we were unable to recover it. 00:40:39.990 [2024-06-10 11:49:04.965259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.965271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.965597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.965610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.965860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.965872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.966136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.966149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.966455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.966466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.966683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.966696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 
00:40:39.991 [2024-06-10 11:49:04.966890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.966902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.967129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.967142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.967397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.967411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.967718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.967731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.967968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.967980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.968242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.968255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.968554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.968569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.968905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.968918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.969153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.969165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 00:40:39.991 [2024-06-10 11:49:04.969389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.991 [2024-06-10 11:49:04.969401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.991 qpair failed and we were unable to recover it. 
00:40:39.991 [2024-06-10 11:49:04.969676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.991 [2024-06-10 11:49:04.969691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.991 qpair failed and we were unable to recover it.
00:40:39.995 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") recurs for every connection attempt from 2024-06-10 11:49:04.969865 through 2024-06-10 11:49:04.994627 ...]
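Editor's note: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP connection to 10.0.0.2:4420 (the standard NVMe/TCP port) is being actively refused because no listener is accepting there while the target is down. The following is a minimal sketch, not SPDK code, showing how a plain blocking POSIX connect() surfaces that errno; the address and port are taken from the log and are used here purely for illustration.

/* Minimal sketch (not the SPDK implementation): a blocking TCP connect()
 * to an address with no listener fails with errno 111 (ECONNREFUSED) on
 * Linux, which is the errno posix_sock_create reports above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target, this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}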
00:40:39.996 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:40:39.996 [2024-06-10 11:49:04.994851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.996 [2024-06-10 11:49:04.994865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.996 qpair failed and we were unable to recover it.
00:40:39.996 [2024-06-10 11:49:04.994953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.996 [2024-06-10 11:49:04.994966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.996 qpair failed and we were unable to recover it.
00:40:39.996 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:40:39.996 [2024-06-10 11:49:04.995113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.996 [2024-06-10 11:49:04.995126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.996 qpair failed and we were unable to recover it.
00:40:39.996 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:40:39.996 [2024-06-10 11:49:04.995407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.996 [2024-06-10 11:49:04.995421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.996 qpair failed and we were unable to recover it.
00:40:39.996 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:40:39.996 [2024-06-10 11:49:04.995701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.996 [2024-06-10 11:49:04.995715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.996 qpair failed and we were unable to recover it.
00:40:39.996 [2024-06-10 11:49:04.995877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.996 [2024-06-10 11:49:04.995889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.996 qpair failed and we were unable to recover it.
00:40:39.996 11:49:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:40:39.996 [2024-06-10 11:49:04.996050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.996 [2024-06-10 11:49:04.996064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.996 qpair failed and we were unable to recover it.
00:40:39.996 [2024-06-10 11:49:04.996220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.996 [2024-06-10 11:49:04.996233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.996 qpair failed and we were unable to recover it.
00:40:39.996 [2024-06-10 11:49:04.996446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:39.996 [2024-06-10 11:49:04.996458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:39.996 qpair failed and we were unable to recover it.
00:40:39.998 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") recurs for every subsequent connection attempt through 2024-06-10 11:49:05.013307 ...]
00:40:39.998 [2024-06-10 11:49:05.013466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.013479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.013697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.013709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.014050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.014063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.014221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.014233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.014443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.014455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.014673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.014686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.014975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.014987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.015168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.015180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.015349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.015361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.015588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.015600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 
00:40:39.998 [2024-06-10 11:49:05.015816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.015829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.016157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.016169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.016449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.998 [2024-06-10 11:49:05.016461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.998 qpair failed and we were unable to recover it. 00:40:39.998 [2024-06-10 11:49:05.016741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.016753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.016990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.017003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.017241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.017253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.017516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.017529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.017769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.017782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.017971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.017986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.018151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.018163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 
00:40:39.999 [2024-06-10 11:49:05.018428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.018441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.018618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.018630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.018864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.018876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.019053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.019065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.019335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.019347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.019584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.019597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.019812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.019825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.020079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.020091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.020294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.020306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.020557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.020569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 
00:40:39.999 [2024-06-10 11:49:05.020819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.020831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.021010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.021025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.021257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.021269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.021507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.021520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.021628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.021640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.021828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.021840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.022016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.022028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.022262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.022275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.022625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.022638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.022865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.022877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 
00:40:39.999 [2024-06-10 11:49:05.023062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.023075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.023401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.023414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.023607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.023620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.023851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.023863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.024097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.024110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.024296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.024308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.024537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.024549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.024795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.024808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.024985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.024998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.025170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.025182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 
00:40:39.999 [2024-06-10 11:49:05.025366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.025378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:39.999 qpair failed and we were unable to recover it. 00:40:39.999 [2024-06-10 11:49:05.025560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:39.999 [2024-06-10 11:49:05.025572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.025836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.025850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.026074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.026087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.026253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.026266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.026461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.026474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.026700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.026713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.026811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.026822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.027058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.027074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.027251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.027264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 
00:40:40.000 [2024-06-10 11:49:05.027501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.027513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.027741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.027754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.027936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.027949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.028119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.028132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.028356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.028369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.028522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.028534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.028763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.028776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.028976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.028988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.029264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.029276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.029446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.029460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 
00:40:40.000 [2024-06-10 11:49:05.029635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.029648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.029805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.029817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.030001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.030014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.030172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.030184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.030340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.030353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.030510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.030522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.030737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.030750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.030920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.030932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.031147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.031159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.031375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.031387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 
00:40:40.000 [2024-06-10 11:49:05.031570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.031595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.031763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.031775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.031865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.031876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.000 [2024-06-10 11:49:05.032095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.000 [2024-06-10 11:49:05.032107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.000 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.032335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.032347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.032515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.032527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.032742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.032755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.032967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.032980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.033130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.033143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.033370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.033383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 
00:40:40.001 [2024-06-10 11:49:05.033545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.033558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.033725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.033738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.033897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.033910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.034068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.034080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.034246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.034260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.034440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.034453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.034608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.034623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.034803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.034815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.034925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.034939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.035171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.035183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 
00:40:40.001 [2024-06-10 11:49:05.035343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.035355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.035553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.035567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.035802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.035815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.035993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.036005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.036162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.036177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.036327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.036343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.036503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.036517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.036681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.036696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.036850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.036866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.036968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.036982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 
00:40:40.001 [2024-06-10 11:49:05.037203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.037217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.037459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.037471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.037629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.037642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.037816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.037828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.037999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.038011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.038165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.038177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.038423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.038436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.038627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.038640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.038800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.038814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 00:40:40.001 [2024-06-10 11:49:05.038968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.001 [2024-06-10 11:49:05.038979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.001 qpair failed and we were unable to recover it. 
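For reference: errno = 111 on Linux is ECONNREFUSED, so each retry above is being actively refused because nothing is listening on 10.0.0.2:4420 while the disconnect test has the target side down; the initiator keeps retrying and every qpair setup fails. A minimal bash probe that reproduces the same refusal (hypothetical, not taken from the test scripts, and assuming bash's /dev/tcp support):

# hypothetical probe, not part of the test suite
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connection to 10.0.0.2:4420 refused (errno 111): no listener on the target port"
fi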
[... the same connect() failed / sock connection error / qpair failed sequence continues at 11:49:05.039140 through 11:49:05.040263 ...]
00:40:40.002 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... two more occurrences of the same sequence at 11:49:05.040497 and 11:49:05.040678 ...]
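The trap registered above wires up the suite's cleanup: on SIGINT, SIGTERM, or normal EXIT it runs process_shm for the app's shared-memory segment (failures ignored by the '|| :' guard) and then nvmftestfini to tear the NVMe-oF test setup back down, so the target is cleaned up even if the test aborts mid-run. A sketch of the same pattern with hypothetical stand-in functions instead of the real helpers:

# stand-ins for process_shm / nvmftestfini (hypothetical, for illustration only)
collect_shm_artifacts() { echo "collecting shared-memory artifacts"; }
teardown_target()       { echo "stopping the nvmf target"; }
trap 'collect_shm_artifacts || :; teardown_target' SIGINT SIGTERM EXIT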
00:40:40.002 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:40:40.002 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:40:40.002 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed / sock connection error / qpair failed retries interleaved with the three commands above continue at 11:49:05.040858 through 11:49:05.042608 ...]
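The rpc_cmd call above asks the running SPDK target, over its RPC socket, to create a RAM-backed block device named Malloc0 with a total size of 64 MB and a 512-byte block size; rpc_cmd is effectively the test suite's wrapper around SPDK's rpc.py. A stand-alone equivalent, assuming a target is already up and listening on the default RPC socket, would be roughly:

# assumes an SPDK target is running with the default RPC socket
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_get_bdevs -b Malloc0    # confirm Malloc0 was created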
[... the same connect() failed (errno = 111) / sock connection error of tqpair=0x7f4870000b90 / qpair failed sequence keeps repeating from 11:49:05.042826 onward, ending with: ...]
00:40:40.003 [2024-06-10 11:49:05.048759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:40.003 [2024-06-10 11:49:05.048771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420
00:40:40.003 qpair failed and we were unable to recover it.
00:40:40.003 [2024-06-10 11:49:05.049024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.049036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.049200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.049213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.049428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.049441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.049615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.049628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.049785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.049798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.050020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.050032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.050181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.050193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.050416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.050428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.050594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.050607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.050756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.050768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 
00:40:40.003 [2024-06-10 11:49:05.050919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.050932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.003 [2024-06-10 11:49:05.051151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.003 [2024-06-10 11:49:05.051164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.003 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.051285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.051298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.051527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.051541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.051690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.051703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.051872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.051885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.052056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.052069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.052241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.052254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.052471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.052484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.052648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.052661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 
00:40:40.266 [2024-06-10 11:49:05.052810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.052823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.052916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.052929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.053079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.053092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.053315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.053329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.053431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.053443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.053679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.053693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.053843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.053856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.054030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.054043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.054201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.054214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.054389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.054401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 
00:40:40.266 [2024-06-10 11:49:05.054622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.054636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.054785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.054798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.055084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.055098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.055251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.055265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.055502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.055517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.055684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.055700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.055980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.055994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.056166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.056180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.266 [2024-06-10 11:49:05.056345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.266 [2024-06-10 11:49:05.056359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.266 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.056651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.056665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 
00:40:40.267 [2024-06-10 11:49:05.056822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.056835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.056999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.057012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.057282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.057295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.057523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.057535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.057865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.057878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.058044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.058057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.058277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.058289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.058468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.058481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.058631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.058644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.058751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.058763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 
00:40:40.267 [2024-06-10 11:49:05.058921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.058933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.059216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.059229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.059382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.059394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.059570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.059588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.059801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.059813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.059973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.059985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.060146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.060159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.060449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.060462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 Malloc0 00:40:40.267 [2024-06-10 11:49:05.060630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.060643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.060860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.060873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 
00:40:40.267 [2024-06-10 11:49:05.061046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.061058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.061287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.061300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:40.267 [2024-06-10 11:49:05.061472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.061485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.061702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.061715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:40:40.267 [2024-06-10 11:49:05.061877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.061889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.062048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:40.267 [2024-06-10 11:49:05.062061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.062314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:40.267 [2024-06-10 11:49:05.062327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.062477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.062489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 
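The second rpc_cmd traced in this block brings up the NVMe-over-TCP transport on the target; the "*** TCP Transport Init ***" notice a few entries further down is the target acknowledging it. A roughly equivalent standalone call via scripts/rpc.py is sketched below; the flags are copied verbatim from the trace (consult rpc.py nvmf_create_transport --help for what -o selects), and the default RPC socket is assumed.
# Hedged sketch (not part of the log): create the TCP transport, flags as traced above
scripts/rpc.py nvmf_create_transport -t tcp -o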
00:40:40.267 [2024-06-10 11:49:05.062705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.062717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.063052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.063065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.063242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.063254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.063483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.063496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.063739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.063751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.063966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.063978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.064078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.267 [2024-06-10 11:49:05.064090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.267 qpair failed and we were unable to recover it. 00:40:40.267 [2024-06-10 11:49:05.064237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.064250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.064462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.064475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.064785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.064797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 
00:40:40.268 [2024-06-10 11:49:05.064957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.064969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.065221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.065234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.065465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.065477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.065593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.065605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.065714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.065726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.065881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.065893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.066195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.066208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.066445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.066457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.066607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.066620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.066840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.066853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 
00:40:40.268 [2024-06-10 11:49:05.067091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.067104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.067279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.067291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.067523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.067535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.067713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.067726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.067941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.067954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.068042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:40.268 [2024-06-10 11:49:05.068111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.068124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.068351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.068363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.068524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.068536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.068761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.068773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 
00:40:40.268 [2024-06-10 11:49:05.069003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.069014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.069161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.069173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.069317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.069329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.069434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.069446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.069619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.069631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.069785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.069798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.070009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.070023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.070308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.070321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.070544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.070556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.070842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.070854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 
00:40:40.268 [2024-06-10 11:49:05.071012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.071026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.071355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.071367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.071532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.071544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.071722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.071734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.071882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.071895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.072090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.268 [2024-06-10 11:49:05.072103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.268 qpair failed and we were unable to recover it. 00:40:40.268 [2024-06-10 11:49:05.072363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.072378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.072596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.072609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.072844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.072857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.073138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.073151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 
00:40:40.269 [2024-06-10 11:49:05.073383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.073396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.073611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.073624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.073771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.073783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.074021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.074034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.074250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.074262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.074493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.074506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.074676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.074689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.074919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.074932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.075099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.075111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.075269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.075281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 
00:40:40.269 [2024-06-10 11:49:05.075484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.075496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.075660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.075673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.075900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.075912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.076132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.076144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.076375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.076388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.076600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.076613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:40.269 [2024-06-10 11:49:05.076781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.076796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.076955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.076967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:40.269 [2024-06-10 11:49:05.077179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.077191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 
00:40:40.269 [2024-06-10 11:49:05.077414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:40.269 [2024-06-10 11:49:05.077427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:40.269 [2024-06-10 11:49:05.077717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.077730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.077963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.077977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.078137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.078149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.078376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.078388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.078568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.078603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.078838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.078850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.079142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.079155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.079381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.079394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 
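The last rpc_cmd traced here creates the NVMe-oF subsystem the host will eventually attach to once the connection retries stop being refused. A roughly equivalent standalone call is sketched below, again assuming the default RPC socket; the arguments are copied from the trace, with -a allowing any host NQN and -s setting the serial number.
# Hedged sketch (not part of the log): create the subsystem, allow any host, set the serial number
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001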
00:40:40.269 [2024-06-10 11:49:05.079586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.079598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.079813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.079825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.080052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.080065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.080279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.269 [2024-06-10 11:49:05.080291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.269 qpair failed and we were unable to recover it. 00:40:40.269 [2024-06-10 11:49:05.080526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.080539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.080692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.080705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.080879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.080891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.081199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.081211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.081438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.081450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.081751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.081764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 
00:40:40.270 [2024-06-10 11:49:05.081998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.082011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.082251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.082263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.082503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.082515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.082809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.082821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.083122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.083135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.083366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.083379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.083495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.083508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.083673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.083686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.083846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.083858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.084074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.084087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 
00:40:40.270 [2024-06-10 11:49:05.084267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.084280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.084506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.084518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:40.270 [2024-06-10 11:49:05.084824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.084837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.084994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.085007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:40.270 [2024-06-10 11:49:05.085241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.085254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:40.270 [2024-06-10 11:49:05.085482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.085495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.085727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.085739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.085900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.085912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 
00:40:40.270 [2024-06-10 11:49:05.086069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.086081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.086407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.086420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.086598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.086611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.086937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.086949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.087115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.087128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.270 qpair failed and we were unable to recover it. 00:40:40.270 [2024-06-10 11:49:05.087345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.270 [2024-06-10 11:49:05.087357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.087638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.087650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.087802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.087814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.088065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.088077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.088289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.088301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 
00:40:40.271 [2024-06-10 11:49:05.088584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.088596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.088785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.088797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.089120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.089132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.089292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.089304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.089588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.089600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.089823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.089836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.090062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.090075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.090289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.090301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.090638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.090651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.090886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.090898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 
00:40:40.271 [2024-06-10 11:49:05.091203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.091216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.091403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.091415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.091646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.091658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.091872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.091884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.092108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.092120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.092334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.092346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.092562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.092574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.092682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.092693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:40.271 [2024-06-10 11:49:05.093013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.093026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 
00:40:40.271 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:40.271 [2024-06-10 11:49:05.093261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.093276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:40.271 [2024-06-10 11:49:05.093584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.093597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:40.271 [2024-06-10 11:49:05.093832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.093844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.094053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.094065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.094292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.094304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.094483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.094495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.094778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.094790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.095074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.095087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 
00:40:40.271 [2024-06-10 11:49:05.095313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.095326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.095536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.095548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.095777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.095790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.096022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.096034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.271 [2024-06-10 11:49:05.096260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:40.271 [2024-06-10 11:49:05.096272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4870000b90 with addr=10.0.0.2, port=4420 00:40:40.271 qpair failed and we were unable to recover it. 00:40:40.272 [2024-06-10 11:49:05.096295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:40.272 [2024-06-10 11:49:05.098702] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.098812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.098834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.098844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.098853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.098877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 
00:40:40.272 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:40.272 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:40.272 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:40.272 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:40.272 [2024-06-10 11:49:05.108632] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.108733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.108751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.108761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.108770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.108789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 00:40:40.272 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:40.272 11:49:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4176737 00:40:40.272 [2024-06-10 11:49:05.118683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.118773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.118791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.118801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.118809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.118828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 
00:40:40.272 [2024-06-10 11:49:05.128599] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.128687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.128705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.128719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.128728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.128746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 00:40:40.272 [2024-06-10 11:49:05.138651] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.138740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.138760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.138770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.138778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.138797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 00:40:40.272 [2024-06-10 11:49:05.148631] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.148721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.148739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.148748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.148757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.148776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 
00:40:40.272 [2024-06-10 11:49:05.158689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.158783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.158801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.158810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.158818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.158836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 00:40:40.272 [2024-06-10 11:49:05.168729] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.168828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.168846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.168855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.168864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.168883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 00:40:40.272 [2024-06-10 11:49:05.178747] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.178836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.178854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.178863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.178871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.178889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 
00:40:40.272 [2024-06-10 11:49:05.188709] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.188797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.188815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.188825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.188833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.188852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 00:40:40.272 [2024-06-10 11:49:05.198795] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.198877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.198895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.198904] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.198912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.198930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 00:40:40.272 [2024-06-10 11:49:05.208845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.208939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.208957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.272 [2024-06-10 11:49:05.208966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.272 [2024-06-10 11:49:05.208975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.272 [2024-06-10 11:49:05.208993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.272 qpair failed and we were unable to recover it. 
00:40:40.272 [2024-06-10 11:49:05.218900] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.272 [2024-06-10 11:49:05.219005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.272 [2024-06-10 11:49:05.219026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.219036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.219044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.219062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.228909] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.228998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.229016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.229025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.229035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.229053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.238957] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.239045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.239063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.239072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.239080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.239099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 
00:40:40.273 [2024-06-10 11:49:05.248901] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.248991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.249008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.249018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.249026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.249044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.258921] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.259011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.259028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.259037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.259046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.259067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.269047] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.269129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.269148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.269157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.269166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.269184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 
00:40:40.273 [2024-06-10 11:49:05.279063] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.279145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.279162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.279172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.279180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.279198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.289057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.289143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.289160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.289169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.289178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.289196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.299132] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.299219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.299236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.299246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.299254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.299272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 
00:40:40.273 [2024-06-10 11:49:05.309080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.309240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.309261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.309270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.309279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.309297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.319121] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.319205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.319222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.319231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.319239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.319257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.329135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.329224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.329241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.329251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.329259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.329277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 
00:40:40.273 [2024-06-10 11:49:05.339194] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.339278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.339295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.339305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.339313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.339331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.349490] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.273 [2024-06-10 11:49:05.349603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.273 [2024-06-10 11:49:05.349621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.273 [2024-06-10 11:49:05.349630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.273 [2024-06-10 11:49:05.349642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.273 [2024-06-10 11:49:05.349659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.273 qpair failed and we were unable to recover it. 00:40:40.273 [2024-06-10 11:49:05.359407] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.274 [2024-06-10 11:49:05.359495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.274 [2024-06-10 11:49:05.359512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.274 [2024-06-10 11:49:05.359521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.274 [2024-06-10 11:49:05.359530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.274 [2024-06-10 11:49:05.359547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.274 qpair failed and we were unable to recover it. 
00:40:40.534 [2024-06-10 11:49:05.369378] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.534 [2024-06-10 11:49:05.369469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.534 [2024-06-10 11:49:05.369486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.534 [2024-06-10 11:49:05.369496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.534 [2024-06-10 11:49:05.369504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.534 [2024-06-10 11:49:05.369522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.534 qpair failed and we were unable to recover it. 00:40:40.534 [2024-06-10 11:49:05.379451] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.534 [2024-06-10 11:49:05.379538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.534 [2024-06-10 11:49:05.379555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.534 [2024-06-10 11:49:05.379564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.534 [2024-06-10 11:49:05.379572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.534 [2024-06-10 11:49:05.379597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.534 qpair failed and we were unable to recover it. 00:40:40.534 [2024-06-10 11:49:05.389442] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.534 [2024-06-10 11:49:05.389521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.534 [2024-06-10 11:49:05.389538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.534 [2024-06-10 11:49:05.389547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.534 [2024-06-10 11:49:05.389556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.534 [2024-06-10 11:49:05.389574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.534 qpair failed and we were unable to recover it. 
00:40:40.534 [2024-06-10 11:49:05.399433] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.534 [2024-06-10 11:49:05.399520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.534 [2024-06-10 11:49:05.399538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.534 [2024-06-10 11:49:05.399547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.534 [2024-06-10 11:49:05.399556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.534 [2024-06-10 11:49:05.399574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.534 qpair failed and we were unable to recover it. 00:40:40.534 [2024-06-10 11:49:05.409420] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.534 [2024-06-10 11:49:05.409508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.534 [2024-06-10 11:49:05.409526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.534 [2024-06-10 11:49:05.409535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.534 [2024-06-10 11:49:05.409543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.534 [2024-06-10 11:49:05.409561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.534 qpair failed and we were unable to recover it. 00:40:40.534 [2024-06-10 11:49:05.419510] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.534 [2024-06-10 11:49:05.419607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.534 [2024-06-10 11:49:05.419625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.534 [2024-06-10 11:49:05.419635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.534 [2024-06-10 11:49:05.419643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.534 [2024-06-10 11:49:05.419661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.534 qpair failed and we were unable to recover it. 
00:40:40.534 [2024-06-10 11:49:05.429408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.534 [2024-06-10 11:49:05.429492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.534 [2024-06-10 11:49:05.429510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.534 [2024-06-10 11:49:05.429519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.534 [2024-06-10 11:49:05.429527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.534 [2024-06-10 11:49:05.429545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.534 qpair failed and we were unable to recover it. 00:40:40.534 [2024-06-10 11:49:05.439529] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.534 [2024-06-10 11:49:05.439618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.534 [2024-06-10 11:49:05.439635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.534 [2024-06-10 11:49:05.439645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.534 [2024-06-10 11:49:05.439658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.534 [2024-06-10 11:49:05.439676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.534 qpair failed and we were unable to recover it. 00:40:40.534 [2024-06-10 11:49:05.449545] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.534 [2024-06-10 11:49:05.449643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.449661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.449670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.449678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.449696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 
00:40:40.535 [2024-06-10 11:49:05.459600] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.459690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.459707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.459717] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.459725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.459743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 00:40:40.535 [2024-06-10 11:49:05.469530] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.469618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.469636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.469645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.469653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.469670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 00:40:40.535 [2024-06-10 11:49:05.479645] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.479732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.479750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.479759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.479767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.479785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 
00:40:40.535 [2024-06-10 11:49:05.489763] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.489848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.489866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.489875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.489884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.489901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 00:40:40.535 [2024-06-10 11:49:05.499723] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.499810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.499827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.499836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.499845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.499863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 00:40:40.535 [2024-06-10 11:49:05.509724] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.509817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.509833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.509843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.509851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.509868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 
00:40:40.535 [2024-06-10 11:49:05.519757] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.519841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.519858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.519868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.519876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.519893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 00:40:40.535 [2024-06-10 11:49:05.529828] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.529917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.529935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.529947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.529956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.529973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 00:40:40.535 [2024-06-10 11:49:05.539798] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.539889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.539906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.539915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.539924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.539942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 
00:40:40.535 [2024-06-10 11:49:05.549849] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.549931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.549948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.549958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.549966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.549984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 00:40:40.535 [2024-06-10 11:49:05.559892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.559971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.559988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.559997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.560005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.560023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 00:40:40.535 [2024-06-10 11:49:05.569907] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.569994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.570011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.570020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.570028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.570045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 
00:40:40.535 [2024-06-10 11:49:05.579917] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.535 [2024-06-10 11:49:05.580041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.535 [2024-06-10 11:49:05.580059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.535 [2024-06-10 11:49:05.580068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.535 [2024-06-10 11:49:05.580076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.535 [2024-06-10 11:49:05.580095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.535 qpair failed and we were unable to recover it. 00:40:40.536 [2024-06-10 11:49:05.589959] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.536 [2024-06-10 11:49:05.590049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.536 [2024-06-10 11:49:05.590067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.536 [2024-06-10 11:49:05.590076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.536 [2024-06-10 11:49:05.590085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.536 [2024-06-10 11:49:05.590102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.536 qpair failed and we were unable to recover it. 00:40:40.536 [2024-06-10 11:49:05.599992] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.536 [2024-06-10 11:49:05.600079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.536 [2024-06-10 11:49:05.600096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.536 [2024-06-10 11:49:05.600105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.536 [2024-06-10 11:49:05.600114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.536 [2024-06-10 11:49:05.600131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.536 qpair failed and we were unable to recover it. 
00:40:40.536 [2024-06-10 11:49:05.609998] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.536 [2024-06-10 11:49:05.610084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.536 [2024-06-10 11:49:05.610101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.536 [2024-06-10 11:49:05.610110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.536 [2024-06-10 11:49:05.610119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.536 [2024-06-10 11:49:05.610136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.536 qpair failed and we were unable to recover it. 00:40:40.536 [2024-06-10 11:49:05.620035] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.536 [2024-06-10 11:49:05.620126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.536 [2024-06-10 11:49:05.620147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.536 [2024-06-10 11:49:05.620156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.536 [2024-06-10 11:49:05.620165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.536 [2024-06-10 11:49:05.620182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.536 qpair failed and we were unable to recover it. 00:40:40.536 [2024-06-10 11:49:05.630083] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.536 [2024-06-10 11:49:05.630161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.536 [2024-06-10 11:49:05.630178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.536 [2024-06-10 11:49:05.630188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.536 [2024-06-10 11:49:05.630196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.536 [2024-06-10 11:49:05.630214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.536 qpair failed and we were unable to recover it. 
00:40:40.795 [2024-06-10 11:49:05.640106] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.795 [2024-06-10 11:49:05.640189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.795 [2024-06-10 11:49:05.640207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.795 [2024-06-10 11:49:05.640216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.640224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.640242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 00:40:40.796 [2024-06-10 11:49:05.650135] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.650220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.650237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.650246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.650255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.650272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 00:40:40.796 [2024-06-10 11:49:05.660193] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.660273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.660290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.660300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.660308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.660328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 
00:40:40.796 [2024-06-10 11:49:05.670188] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.670274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.670292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.670302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.670310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.670328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 00:40:40.796 [2024-06-10 11:49:05.680219] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.680379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.680397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.680406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.680415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.680432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 00:40:40.796 [2024-06-10 11:49:05.690236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.690323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.690340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.690349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.690357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.690375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 
00:40:40.796 [2024-06-10 11:49:05.700267] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.700351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.700368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.700378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.700386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.700403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 00:40:40.796 [2024-06-10 11:49:05.710302] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.710389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.710409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.710419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.710427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.710444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 00:40:40.796 [2024-06-10 11:49:05.720336] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.720416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.720435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.720445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.720453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.720472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 
00:40:40.796 [2024-06-10 11:49:05.730349] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.730438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.730456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.730465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.730474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.730491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 00:40:40.796 [2024-06-10 11:49:05.740387] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.740509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.740527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.740536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.740545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.740562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 00:40:40.796 [2024-06-10 11:49:05.750418] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.750500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.796 [2024-06-10 11:49:05.750517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.796 [2024-06-10 11:49:05.750527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.796 [2024-06-10 11:49:05.750538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.796 [2024-06-10 11:49:05.750556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.796 qpair failed and we were unable to recover it. 
00:40:40.796 [2024-06-10 11:49:05.760435] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.796 [2024-06-10 11:49:05.760518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.760535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.760545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.760553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.760571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 00:40:40.797 [2024-06-10 11:49:05.770468] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.770561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.770583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.770593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.770602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.770620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 00:40:40.797 [2024-06-10 11:49:05.780490] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.780581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.780599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.780609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.780618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.780636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 
00:40:40.797 [2024-06-10 11:49:05.790531] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.790618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.790635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.790644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.790653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.790671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 00:40:40.797 [2024-06-10 11:49:05.800604] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.800693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.800710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.800719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.800727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.800745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 00:40:40.797 [2024-06-10 11:49:05.810591] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.810675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.810693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.810702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.810711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.810728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 
00:40:40.797 [2024-06-10 11:49:05.820618] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.820708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.820726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.820735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.820744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.820761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 00:40:40.797 [2024-06-10 11:49:05.830649] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.830741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.830758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.830768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.830776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.830794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 00:40:40.797 [2024-06-10 11:49:05.840662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.840746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.840764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.840773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.840784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.840802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 
00:40:40.797 [2024-06-10 11:49:05.850694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.850781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.850799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.850808] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.850817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.850835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 00:40:40.797 [2024-06-10 11:49:05.860728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.860814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.860832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.860841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.860849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.860867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 00:40:40.797 [2024-06-10 11:49:05.870778] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.870933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.870950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.870960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.870968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.870986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 
00:40:40.797 [2024-06-10 11:49:05.880733] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.880817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.880834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.880844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.880852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.797 [2024-06-10 11:49:05.880869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.797 qpair failed and we were unable to recover it. 00:40:40.797 [2024-06-10 11:49:05.890809] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:40.797 [2024-06-10 11:49:05.890896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:40.797 [2024-06-10 11:49:05.890913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:40.797 [2024-06-10 11:49:05.890922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:40.797 [2024-06-10 11:49:05.890931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:40.798 [2024-06-10 11:49:05.890948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:40.798 qpair failed and we were unable to recover it. 00:40:41.057 [2024-06-10 11:49:05.900892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.057 [2024-06-10 11:49:05.901081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.057 [2024-06-10 11:49:05.901099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.057 [2024-06-10 11:49:05.901108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.057 [2024-06-10 11:49:05.901117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.057 [2024-06-10 11:49:05.901135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.057 qpair failed and we were unable to recover it. 
00:40:41.057 [2024-06-10 11:49:05.910893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.057 [2024-06-10 11:49:05.910978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.057 [2024-06-10 11:49:05.910995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.057 [2024-06-10 11:49:05.911005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.057 [2024-06-10 11:49:05.911013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.057 [2024-06-10 11:49:05.911031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.057 qpair failed and we were unable to recover it. 00:40:41.057 [2024-06-10 11:49:05.920907] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.057 [2024-06-10 11:49:05.920992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.057 [2024-06-10 11:49:05.921009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.057 [2024-06-10 11:49:05.921018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.057 [2024-06-10 11:49:05.921026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.057 [2024-06-10 11:49:05.921044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.057 qpair failed and we were unable to recover it. 00:40:41.057 [2024-06-10 11:49:05.930942] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:05.931065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:05.931082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:05.931094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:05.931103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:05.931120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 
00:40:41.058 [2024-06-10 11:49:05.940936] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:05.941024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:05.941042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:05.941051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:05.941059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:05.941077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 00:40:41.058 [2024-06-10 11:49:05.951023] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:05.951110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:05.951127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:05.951137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:05.951145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:05.951163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 00:40:41.058 [2024-06-10 11:49:05.961012] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:05.961104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:05.961121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:05.961131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:05.961139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:05.961156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 
00:40:41.058 [2024-06-10 11:49:05.971052] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:05.971140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:05.971157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:05.971166] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:05.971175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:05.971192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 00:40:41.058 [2024-06-10 11:49:05.981091] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:05.981179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:05.981197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:05.981206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:05.981215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:05.981232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 00:40:41.058 [2024-06-10 11:49:05.991190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:05.991290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:05.991307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:05.991316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:05.991325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:05.991343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 
00:40:41.058 [2024-06-10 11:49:06.001146] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:06.001233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:06.001250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:06.001259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:06.001268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:06.001286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 00:40:41.058 [2024-06-10 11:49:06.011167] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:06.011252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:06.011269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:06.011278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:06.011287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:06.011304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 00:40:41.058 [2024-06-10 11:49:06.021195] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:06.021361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:06.021382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:06.021391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:06.021400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:06.021417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 
00:40:41.058 [2024-06-10 11:49:06.031233] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:06.031320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:06.031337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:06.031347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:06.031355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:06.031372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 00:40:41.058 [2024-06-10 11:49:06.041276] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:06.041357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:06.041375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:06.041384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:06.041392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:06.041410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 00:40:41.058 [2024-06-10 11:49:06.051297] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:06.051385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.058 [2024-06-10 11:49:06.051402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.058 [2024-06-10 11:49:06.051411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.058 [2024-06-10 11:49:06.051419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.058 [2024-06-10 11:49:06.051437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.058 qpair failed and we were unable to recover it. 
00:40:41.058 [2024-06-10 11:49:06.061313] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.058 [2024-06-10 11:49:06.061398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.061415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.061424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.061433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.061456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 00:40:41.059 [2024-06-10 11:49:06.071369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.059 [2024-06-10 11:49:06.071465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.071483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.071492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.071500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.071518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 00:40:41.059 [2024-06-10 11:49:06.081389] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.059 [2024-06-10 11:49:06.081472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.081490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.081499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.081508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.081526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 
00:40:41.059 [2024-06-10 11:49:06.091405] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.059 [2024-06-10 11:49:06.091490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.091507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.091516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.091525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.091542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 00:40:41.059 [2024-06-10 11:49:06.101439] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.059 [2024-06-10 11:49:06.101526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.101543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.101553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.101561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.101582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 00:40:41.059 [2024-06-10 11:49:06.111407] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.059 [2024-06-10 11:49:06.111491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.111512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.111522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.111530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.111548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 
00:40:41.059 [2024-06-10 11:49:06.121504] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.059 [2024-06-10 11:49:06.121588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.121606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.121615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.121624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.121641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 00:40:41.059 [2024-06-10 11:49:06.131534] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.059 [2024-06-10 11:49:06.131648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.131665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.131674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.131683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.131700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 00:40:41.059 [2024-06-10 11:49:06.141570] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.059 [2024-06-10 11:49:06.141663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.141680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.141690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.141698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.141716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 
00:40:41.059 [2024-06-10 11:49:06.151614] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.059 [2024-06-10 11:49:06.151695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.059 [2024-06-10 11:49:06.151713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.059 [2024-06-10 11:49:06.151722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.059 [2024-06-10 11:49:06.151731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.059 [2024-06-10 11:49:06.151752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.059 qpair failed and we were unable to recover it. 00:40:41.319 [2024-06-10 11:49:06.161663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.319 [2024-06-10 11:49:06.161756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.319 [2024-06-10 11:49:06.161775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.319 [2024-06-10 11:49:06.161784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.319 [2024-06-10 11:49:06.161793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.319 [2024-06-10 11:49:06.161811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.319 qpair failed and we were unable to recover it. 00:40:41.319 [2024-06-10 11:49:06.171668] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.319 [2024-06-10 11:49:06.171755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.319 [2024-06-10 11:49:06.171774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.319 [2024-06-10 11:49:06.171783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.319 [2024-06-10 11:49:06.171792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.319 [2024-06-10 11:49:06.171810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.319 qpair failed and we were unable to recover it. 
00:40:41.319 [2024-06-10 11:49:06.181695] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.319 [2024-06-10 11:49:06.181777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.319 [2024-06-10 11:49:06.181794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.319 [2024-06-10 11:49:06.181804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.319 [2024-06-10 11:49:06.181812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.319 [2024-06-10 11:49:06.181830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.319 qpair failed and we were unable to recover it. 00:40:41.319 [2024-06-10 11:49:06.191717] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.319 [2024-06-10 11:49:06.191806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.319 [2024-06-10 11:49:06.191823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.319 [2024-06-10 11:49:06.191832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.319 [2024-06-10 11:49:06.191841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.319 [2024-06-10 11:49:06.191859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.319 qpair failed and we were unable to recover it. 00:40:41.319 [2024-06-10 11:49:06.201767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.319 [2024-06-10 11:49:06.201854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.319 [2024-06-10 11:49:06.201871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.319 [2024-06-10 11:49:06.201881] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.319 [2024-06-10 11:49:06.201889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.319 [2024-06-10 11:49:06.201907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.319 qpair failed and we were unable to recover it. 
00:40:41.319 [2024-06-10 11:49:06.211766] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.319 [2024-06-10 11:49:06.211854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.319 [2024-06-10 11:49:06.211871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.319 [2024-06-10 11:49:06.211880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.319 [2024-06-10 11:49:06.211889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.319 [2024-06-10 11:49:06.211907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.319 qpair failed and we were unable to recover it. 00:40:41.319 [2024-06-10 11:49:06.221742] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.319 [2024-06-10 11:49:06.221838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.319 [2024-06-10 11:49:06.221855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.221864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.221873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.221891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 00:40:41.320 [2024-06-10 11:49:06.231853] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.231949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.231967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.231976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.231984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.232001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 
00:40:41.320 [2024-06-10 11:49:06.241815] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.241905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.241922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.241931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.241942] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.241960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 00:40:41.320 [2024-06-10 11:49:06.251862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.251948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.251965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.251974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.251983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.252000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 00:40:41.320 [2024-06-10 11:49:06.261840] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.261930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.261948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.261957] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.261966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.261983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 
00:40:41.320 [2024-06-10 11:49:06.271890] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.271978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.271995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.272004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.272013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.272030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 00:40:41.320 [2024-06-10 11:49:06.281990] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.282079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.282096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.282105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.282113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.282131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 00:40:41.320 [2024-06-10 11:49:06.292001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.292088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.292105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.292115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.292123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.292141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 
00:40:41.320 [2024-06-10 11:49:06.302039] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.302135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.302152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.302162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.302170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.302187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 00:40:41.320 [2024-06-10 11:49:06.312065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.312152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.312169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.312178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.312186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.312204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 00:40:41.320 [2024-06-10 11:49:06.322085] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.322169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.322186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.322195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.322204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.322221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 
00:40:41.320 [2024-06-10 11:49:06.332124] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.332245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.332263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.332275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.332283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.332301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 00:40:41.320 [2024-06-10 11:49:06.342151] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.342315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.342332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.342341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.342349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.342367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 00:40:41.320 [2024-06-10 11:49:06.352190] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.320 [2024-06-10 11:49:06.352276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.320 [2024-06-10 11:49:06.352296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.320 [2024-06-10 11:49:06.352306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.320 [2024-06-10 11:49:06.352314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.320 [2024-06-10 11:49:06.352333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.320 qpair failed and we were unable to recover it. 
00:40:41.321 [2024-06-10 11:49:06.362142] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.321 [2024-06-10 11:49:06.362227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.321 [2024-06-10 11:49:06.362245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.321 [2024-06-10 11:49:06.362254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.321 [2024-06-10 11:49:06.362262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.321 [2024-06-10 11:49:06.362280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.321 qpair failed and we were unable to recover it. 00:40:41.321 [2024-06-10 11:49:06.372185] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.321 [2024-06-10 11:49:06.372270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.321 [2024-06-10 11:49:06.372288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.321 [2024-06-10 11:49:06.372297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.321 [2024-06-10 11:49:06.372305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.321 [2024-06-10 11:49:06.372324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.321 qpair failed and we were unable to recover it. 00:40:41.321 [2024-06-10 11:49:06.382277] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.321 [2024-06-10 11:49:06.382367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.321 [2024-06-10 11:49:06.382384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.321 [2024-06-10 11:49:06.382393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.321 [2024-06-10 11:49:06.382402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.321 [2024-06-10 11:49:06.382420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.321 qpair failed and we were unable to recover it. 
00:40:41.321 [2024-06-10 11:49:06.392226] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.321 [2024-06-10 11:49:06.392314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.321 [2024-06-10 11:49:06.392331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.321 [2024-06-10 11:49:06.392340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.321 [2024-06-10 11:49:06.392349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.321 [2024-06-10 11:49:06.392367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.321 qpair failed and we were unable to recover it. 00:40:41.321 [2024-06-10 11:49:06.402352] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.321 [2024-06-10 11:49:06.402432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.321 [2024-06-10 11:49:06.402450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.321 [2024-06-10 11:49:06.402459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.321 [2024-06-10 11:49:06.402468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.321 [2024-06-10 11:49:06.402485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.321 qpair failed and we were unable to recover it. 00:40:41.321 [2024-06-10 11:49:06.412352] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.321 [2024-06-10 11:49:06.412436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.321 [2024-06-10 11:49:06.412454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.321 [2024-06-10 11:49:06.412463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.321 [2024-06-10 11:49:06.412472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.321 [2024-06-10 11:49:06.412490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.321 qpair failed and we were unable to recover it. 
00:40:41.581 [2024-06-10 11:49:06.422335] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.581 [2024-06-10 11:49:06.422428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.581 [2024-06-10 11:49:06.422447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.581 [2024-06-10 11:49:06.422459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.581 [2024-06-10 11:49:06.422468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.581 [2024-06-10 11:49:06.422486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.581 qpair failed and we were unable to recover it. 00:40:41.581 [2024-06-10 11:49:06.432421] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.581 [2024-06-10 11:49:06.432507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.581 [2024-06-10 11:49:06.432525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.581 [2024-06-10 11:49:06.432535] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.581 [2024-06-10 11:49:06.432543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.581 [2024-06-10 11:49:06.432561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.581 qpair failed and we were unable to recover it. 00:40:41.581 [2024-06-10 11:49:06.442456] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.442539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.442557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.442567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.442580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.442599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 
00:40:41.582 [2024-06-10 11:49:06.452468] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.452614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.452632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.452641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.452650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.452668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 00:40:41.582 [2024-06-10 11:49:06.462540] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.462631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.462649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.462658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.462667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.462684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 00:40:41.582 [2024-06-10 11:49:06.472539] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.472714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.472731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.472740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.472749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.472767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 
00:40:41.582 [2024-06-10 11:49:06.482548] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.482639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.482657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.482666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.482674] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.482692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 00:40:41.582 [2024-06-10 11:49:06.492552] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.492644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.492661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.492670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.492679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.492697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 00:40:41.582 [2024-06-10 11:49:06.502623] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.502724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.502741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.502751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.502759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.502777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 
00:40:41.582 [2024-06-10 11:49:06.512644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.512731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.512751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.512760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.512769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.512787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 00:40:41.582 [2024-06-10 11:49:06.522666] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.522751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.522769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.522778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.522786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.522804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 00:40:41.582 [2024-06-10 11:49:06.532692] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.532779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.532797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.532806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.532814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.532832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 
00:40:41.582 [2024-06-10 11:49:06.542761] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.542846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.542864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.542873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.542881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.542899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 00:40:41.582 [2024-06-10 11:49:06.552774] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.552862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.552880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.552889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.552897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.552918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 00:40:41.582 [2024-06-10 11:49:06.562767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.562882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.562900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.562909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.582 [2024-06-10 11:49:06.562917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.582 [2024-06-10 11:49:06.562935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.582 qpair failed and we were unable to recover it. 
00:40:41.582 [2024-06-10 11:49:06.572783] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.582 [2024-06-10 11:49:06.572918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.582 [2024-06-10 11:49:06.572935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.582 [2024-06-10 11:49:06.572945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.572953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.572971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 00:40:41.583 [2024-06-10 11:49:06.582862] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.582958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.582976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.582985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.582994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.583011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 00:40:41.583 [2024-06-10 11:49:06.592832] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.592918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.592935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.592944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.592953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.592970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 
00:40:41.583 [2024-06-10 11:49:06.602888] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.603007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.603027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.603036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.603045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.603062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 00:40:41.583 [2024-06-10 11:49:06.612889] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.612973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.612990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.612999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.613007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.613025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 00:40:41.583 [2024-06-10 11:49:06.622989] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.623069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.623087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.623096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.623104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.623122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 
00:40:41.583 [2024-06-10 11:49:06.632968] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.633052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.633069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.633079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.633087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.633104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 00:40:41.583 [2024-06-10 11:49:06.642975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.643055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.643072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.643081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.643093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.643110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 00:40:41.583 [2024-06-10 11:49:06.653065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.653154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.653171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.653180] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.653188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.653206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 
00:40:41.583 [2024-06-10 11:49:06.663031] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.663118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.663135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.663145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.663153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.663171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 00:40:41.583 [2024-06-10 11:49:06.673200] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.673355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.673372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.673381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.673390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.673408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 00:40:41.583 [2024-06-10 11:49:06.683188] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.583 [2024-06-10 11:49:06.683300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.583 [2024-06-10 11:49:06.683321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.583 [2024-06-10 11:49:06.683331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.583 [2024-06-10 11:49:06.683339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.583 [2024-06-10 11:49:06.683358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.583 qpair failed and we were unable to recover it. 
00:40:41.843 [2024-06-10 11:49:06.693182] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.843 [2024-06-10 11:49:06.693316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.843 [2024-06-10 11:49:06.693334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.843 [2024-06-10 11:49:06.693344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.843 [2024-06-10 11:49:06.693352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.843 [2024-06-10 11:49:06.693370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.843 qpair failed and we were unable to recover it. 00:40:41.843 [2024-06-10 11:49:06.703232] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.703335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.703352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.703362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.703370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.703388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 00:40:41.844 [2024-06-10 11:49:06.713256] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.713345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.713364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.713377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.713391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.713417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 
00:40:41.844 [2024-06-10 11:49:06.723312] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.723412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.723430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.723439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.723448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.723466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 00:40:41.844 [2024-06-10 11:49:06.733306] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.733390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.733408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.733420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.733429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.733447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 00:40:41.844 [2024-06-10 11:49:06.743408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.743544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.743561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.743571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.743583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.743601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 
00:40:41.844 [2024-06-10 11:49:06.753312] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.753397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.753414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.753423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.753432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.753449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 00:40:41.844 [2024-06-10 11:49:06.763401] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.763486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.763504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.763513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.763521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.763539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 00:40:41.844 [2024-06-10 11:49:06.773473] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.773562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.773587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.773597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.773605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.773623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 
00:40:41.844 [2024-06-10 11:49:06.783460] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.783549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.783566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.783581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.783590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.783607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 00:40:41.844 [2024-06-10 11:49:06.793491] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.793582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.793599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.793609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.793617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.793635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 00:40:41.844 [2024-06-10 11:49:06.803519] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.803612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.803630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.803639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.803648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.803666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 
00:40:41.844 [2024-06-10 11:49:06.813532] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.813620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.813638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.813647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.813655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.813673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 00:40:41.844 [2024-06-10 11:49:06.823617] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.823715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.823732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.844 [2024-06-10 11:49:06.823746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.844 [2024-06-10 11:49:06.823755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.844 [2024-06-10 11:49:06.823773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.844 qpair failed and we were unable to recover it. 00:40:41.844 [2024-06-10 11:49:06.833596] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.844 [2024-06-10 11:49:06.833683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.844 [2024-06-10 11:49:06.833701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.833710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.833718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.833736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 
00:40:41.845 [2024-06-10 11:49:06.843581] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.843711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.843728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.843737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.843746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.843764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 00:40:41.845 [2024-06-10 11:49:06.853663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.853746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.853764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.853773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.853781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.853799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 00:40:41.845 [2024-06-10 11:49:06.863696] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.863787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.863805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.863814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.863822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.863840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 
00:40:41.845 [2024-06-10 11:49:06.873804] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.873937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.873954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.873963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.873972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.873989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 00:40:41.845 [2024-06-10 11:49:06.883745] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.883828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.883845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.883855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.883863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.883881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 00:40:41.845 [2024-06-10 11:49:06.893788] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.893876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.893893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.893903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.893911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.893929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 
00:40:41.845 [2024-06-10 11:49:06.903742] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.903829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.903846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.903855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.903864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.903881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 00:40:41.845 [2024-06-10 11:49:06.913867] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.913955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.913975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.913985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.913993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.914011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 00:40:41.845 [2024-06-10 11:49:06.923865] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.923956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.923973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.923982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.923991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.924008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 
00:40:41.845 [2024-06-10 11:49:06.933941] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.934060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.934077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.934086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.934094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.934111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 00:40:41.845 [2024-06-10 11:49:06.943894] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:41.845 [2024-06-10 11:49:06.943988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:41.845 [2024-06-10 11:49:06.944006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:41.845 [2024-06-10 11:49:06.944015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:41.845 [2024-06-10 11:49:06.944024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:41.845 [2024-06-10 11:49:06.944043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:41.845 qpair failed and we were unable to recover it. 00:40:42.105 [2024-06-10 11:49:06.953898] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.105 [2024-06-10 11:49:06.953980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.105 [2024-06-10 11:49:06.953998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.105 [2024-06-10 11:49:06.954008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.105 [2024-06-10 11:49:06.954016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.105 [2024-06-10 11:49:06.954038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.105 qpair failed and we were unable to recover it. 
00:40:42.105 [2024-06-10 11:49:06.964038] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.105 [2024-06-10 11:49:06.964119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.105 [2024-06-10 11:49:06.964137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.105 [2024-06-10 11:49:06.964146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.105 [2024-06-10 11:49:06.964155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.105 [2024-06-10 11:49:06.964172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.105 qpair failed and we were unable to recover it. 00:40:42.105 [2024-06-10 11:49:06.973982] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.105 [2024-06-10 11:49:06.974072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.105 [2024-06-10 11:49:06.974090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.105 [2024-06-10 11:49:06.974099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.105 [2024-06-10 11:49:06.974108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.105 [2024-06-10 11:49:06.974125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.105 qpair failed and we were unable to recover it. 00:40:42.105 [2024-06-10 11:49:06.984077] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.105 [2024-06-10 11:49:06.984164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.105 [2024-06-10 11:49:06.984182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.105 [2024-06-10 11:49:06.984191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.105 [2024-06-10 11:49:06.984199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.105 [2024-06-10 11:49:06.984217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.105 qpair failed and we were unable to recover it. 
00:40:42.105 [2024-06-10 11:49:06.994013] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.105 [2024-06-10 11:49:06.994097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.105 [2024-06-10 11:49:06.994115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:06.994124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:06.994133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:06.994151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 00:40:42.106 [2024-06-10 11:49:07.004152] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.004237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.004258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.004267] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.004275] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.004293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 00:40:42.106 [2024-06-10 11:49:07.014187] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.014305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.014322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.014332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.014341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.014358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 
00:40:42.106 [2024-06-10 11:49:07.024178] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.024269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.024286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.024295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.024303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.024321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 00:40:42.106 [2024-06-10 11:49:07.034168] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.034261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.034278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.034287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.034295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.034313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 00:40:42.106 [2024-06-10 11:49:07.044253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.044335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.044352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.044362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.044373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.044392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 
00:40:42.106 [2024-06-10 11:49:07.054288] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.054398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.054415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.054424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.054433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.054450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 00:40:42.106 [2024-06-10 11:49:07.064330] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.064438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.064455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.064464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.064473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.064490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 00:40:42.106 [2024-06-10 11:49:07.074348] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.074448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.074466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.074475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.074483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.074501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 
00:40:42.106 [2024-06-10 11:49:07.084331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.084413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.084431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.084440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.084448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.084466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 00:40:42.106 [2024-06-10 11:49:07.094379] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.094463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.094480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.094490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.094498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.094515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 00:40:42.106 [2024-06-10 11:49:07.104411] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.104497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.104514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.104523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.104531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.104549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 
00:40:42.106 [2024-06-10 11:49:07.114444] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.114532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.114548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.114558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.106 [2024-06-10 11:49:07.114566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.106 [2024-06-10 11:49:07.114588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.106 qpair failed and we were unable to recover it. 00:40:42.106 [2024-06-10 11:49:07.124459] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.106 [2024-06-10 11:49:07.124547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.106 [2024-06-10 11:49:07.124565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.106 [2024-06-10 11:49:07.124574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.107 [2024-06-10 11:49:07.124586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.107 [2024-06-10 11:49:07.124604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.107 qpair failed and we were unable to recover it. 00:40:42.107 [2024-06-10 11:49:07.134452] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.107 [2024-06-10 11:49:07.134536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.107 [2024-06-10 11:49:07.134553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.107 [2024-06-10 11:49:07.134562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.107 [2024-06-10 11:49:07.134574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.107 [2024-06-10 11:49:07.134597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.107 qpair failed and we were unable to recover it. 
00:40:42.107 [2024-06-10 11:49:07.144533] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.107 [2024-06-10 11:49:07.144625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.107 [2024-06-10 11:49:07.144647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.107 [2024-06-10 11:49:07.144657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.107 [2024-06-10 11:49:07.144665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.107 [2024-06-10 11:49:07.144683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.107 qpair failed and we were unable to recover it. 00:40:42.107 [2024-06-10 11:49:07.154556] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.107 [2024-06-10 11:49:07.154646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.107 [2024-06-10 11:49:07.154664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.107 [2024-06-10 11:49:07.154673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.107 [2024-06-10 11:49:07.154681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.107 [2024-06-10 11:49:07.154699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.107 qpair failed and we were unable to recover it. 00:40:42.107 [2024-06-10 11:49:07.164538] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.107 [2024-06-10 11:49:07.164626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.107 [2024-06-10 11:49:07.164645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.107 [2024-06-10 11:49:07.164654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.107 [2024-06-10 11:49:07.164662] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.107 [2024-06-10 11:49:07.164680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.107 qpair failed and we were unable to recover it. 
00:40:42.107 [2024-06-10 11:49:07.174562] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.107 [2024-06-10 11:49:07.174655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.107 [2024-06-10 11:49:07.174672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.107 [2024-06-10 11:49:07.174682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.107 [2024-06-10 11:49:07.174690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.107 [2024-06-10 11:49:07.174708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.107 qpair failed and we were unable to recover it. 00:40:42.107 [2024-06-10 11:49:07.184580] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.107 [2024-06-10 11:49:07.184670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.107 [2024-06-10 11:49:07.184687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.107 [2024-06-10 11:49:07.184696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.107 [2024-06-10 11:49:07.184704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.107 [2024-06-10 11:49:07.184722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.107 qpair failed and we were unable to recover it. 00:40:42.107 [2024-06-10 11:49:07.194663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.107 [2024-06-10 11:49:07.194745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.107 [2024-06-10 11:49:07.194762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.107 [2024-06-10 11:49:07.194771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.107 [2024-06-10 11:49:07.194780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.107 [2024-06-10 11:49:07.194797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.107 qpair failed and we were unable to recover it. 
00:40:42.107 [2024-06-10 11:49:07.204729] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.107 [2024-06-10 11:49:07.204828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.107 [2024-06-10 11:49:07.204845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.107 [2024-06-10 11:49:07.204855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.107 [2024-06-10 11:49:07.204863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.107 [2024-06-10 11:49:07.204881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.107 qpair failed and we were unable to recover it. 00:40:42.367 [2024-06-10 11:49:07.214810] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.367 [2024-06-10 11:49:07.214894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.367 [2024-06-10 11:49:07.214913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.367 [2024-06-10 11:49:07.214922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.367 [2024-06-10 11:49:07.214930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.367 [2024-06-10 11:49:07.214948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.367 qpair failed and we were unable to recover it. 00:40:42.367 [2024-06-10 11:49:07.224765] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.367 [2024-06-10 11:49:07.224857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.367 [2024-06-10 11:49:07.224874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.367 [2024-06-10 11:49:07.224886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.367 [2024-06-10 11:49:07.224895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.367 [2024-06-10 11:49:07.224913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.367 qpair failed and we were unable to recover it. 
00:40:42.367 [2024-06-10 11:49:07.234795] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.367 [2024-06-10 11:49:07.234880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.367 [2024-06-10 11:49:07.234897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.367 [2024-06-10 11:49:07.234906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.367 [2024-06-10 11:49:07.234915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.367 [2024-06-10 11:49:07.234932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.367 qpair failed and we were unable to recover it. 00:40:42.367 [2024-06-10 11:49:07.244841] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.367 [2024-06-10 11:49:07.244924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.367 [2024-06-10 11:49:07.244941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.367 [2024-06-10 11:49:07.244950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.367 [2024-06-10 11:49:07.244959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.367 [2024-06-10 11:49:07.244976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.367 qpair failed and we were unable to recover it. 00:40:42.367 [2024-06-10 11:49:07.254836] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.367 [2024-06-10 11:49:07.254941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.367 [2024-06-10 11:49:07.254958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.367 [2024-06-10 11:49:07.254967] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.367 [2024-06-10 11:49:07.254975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.367 [2024-06-10 11:49:07.254992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.367 qpair failed and we were unable to recover it. 
00:40:42.367 [2024-06-10 11:49:07.264894] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.367 [2024-06-10 11:49:07.264984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.367 [2024-06-10 11:49:07.265002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.367 [2024-06-10 11:49:07.265011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.367 [2024-06-10 11:49:07.265019] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.367 [2024-06-10 11:49:07.265037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.367 qpair failed and we were unable to recover it. 00:40:42.367 [2024-06-10 11:49:07.274903] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.367 [2024-06-10 11:49:07.274990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.367 [2024-06-10 11:49:07.275007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.367 [2024-06-10 11:49:07.275016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.367 [2024-06-10 11:49:07.275025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.367 [2024-06-10 11:49:07.275042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.367 qpair failed and we were unable to recover it. 00:40:42.367 [2024-06-10 11:49:07.284911] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.367 [2024-06-10 11:49:07.284996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.367 [2024-06-10 11:49:07.285013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.367 [2024-06-10 11:49:07.285022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.367 [2024-06-10 11:49:07.285031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.367 [2024-06-10 11:49:07.285049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.367 qpair failed and we were unable to recover it. 
00:40:42.368 [2024-06-10 11:49:07.294950] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.295034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.295051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.295061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.295070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.295087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 00:40:42.368 [2024-06-10 11:49:07.304994] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.305082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.305100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.305110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.305118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.305136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 00:40:42.368 [2024-06-10 11:49:07.315009] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.315092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.315112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.315121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.315130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.315147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 
00:40:42.368 [2024-06-10 11:49:07.324956] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.325045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.325062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.325071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.325079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.325096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 00:40:42.368 [2024-06-10 11:49:07.335162] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.335259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.335277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.335287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.335297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.335315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 00:40:42.368 [2024-06-10 11:49:07.345079] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.345167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.345186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.345195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.345204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.345221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 
00:40:42.368 [2024-06-10 11:49:07.355147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.355232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.355250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.355259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.355268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.355289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 00:40:42.368 [2024-06-10 11:49:07.365178] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.365281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.365300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.365309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.365318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.365335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 00:40:42.368 [2024-06-10 11:49:07.375173] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.375260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.375277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.375286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.375295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.375312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 
00:40:42.368 [2024-06-10 11:49:07.385131] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.385220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.385237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.385246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.385254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.385272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 00:40:42.368 [2024-06-10 11:49:07.395241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.395330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.395348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.395357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.395365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.368 [2024-06-10 11:49:07.395383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.368 qpair failed and we were unable to recover it. 00:40:42.368 [2024-06-10 11:49:07.405299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.368 [2024-06-10 11:49:07.405396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.368 [2024-06-10 11:49:07.405417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.368 [2024-06-10 11:49:07.405426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.368 [2024-06-10 11:49:07.405435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.369 [2024-06-10 11:49:07.405453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.369 qpair failed and we were unable to recover it. 
00:40:42.369 [2024-06-10 11:49:07.415299] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.369 [2024-06-10 11:49:07.415386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.369 [2024-06-10 11:49:07.415404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.369 [2024-06-10 11:49:07.415413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.369 [2024-06-10 11:49:07.415421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.369 [2024-06-10 11:49:07.415439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.369 qpair failed and we were unable to recover it. 00:40:42.369 [2024-06-10 11:49:07.425343] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.369 [2024-06-10 11:49:07.425432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.369 [2024-06-10 11:49:07.425449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.369 [2024-06-10 11:49:07.425458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.369 [2024-06-10 11:49:07.425466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.369 [2024-06-10 11:49:07.425484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.369 qpair failed and we were unable to recover it. 00:40:42.369 [2024-06-10 11:49:07.435341] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.369 [2024-06-10 11:49:07.435473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.369 [2024-06-10 11:49:07.435490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.369 [2024-06-10 11:49:07.435500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.369 [2024-06-10 11:49:07.435508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.369 [2024-06-10 11:49:07.435526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.369 qpair failed and we were unable to recover it. 
00:40:42.369 [2024-06-10 11:49:07.445399] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.369 [2024-06-10 11:49:07.445487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.369 [2024-06-10 11:49:07.445505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.369 [2024-06-10 11:49:07.445514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.369 [2024-06-10 11:49:07.445526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.369 [2024-06-10 11:49:07.445544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.369 qpair failed and we were unable to recover it. 00:40:42.369 [2024-06-10 11:49:07.455412] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.369 [2024-06-10 11:49:07.455581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.369 [2024-06-10 11:49:07.455598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.369 [2024-06-10 11:49:07.455608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.369 [2024-06-10 11:49:07.455616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.369 [2024-06-10 11:49:07.455635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.369 qpair failed and we were unable to recover it. 00:40:42.369 [2024-06-10 11:49:07.465438] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.369 [2024-06-10 11:49:07.465525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.369 [2024-06-10 11:49:07.465543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.369 [2024-06-10 11:49:07.465552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.369 [2024-06-10 11:49:07.465561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.369 [2024-06-10 11:49:07.465583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.369 qpair failed and we were unable to recover it. 
00:40:42.628 [2024-06-10 11:49:07.475457] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.628 [2024-06-10 11:49:07.475543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.628 [2024-06-10 11:49:07.475561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.475573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.475586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.475605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 00:40:42.629 [2024-06-10 11:49:07.485505] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.485596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.485614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.485624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.485632] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.485650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 00:40:42.629 [2024-06-10 11:49:07.495461] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.495552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.495569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.495584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.495592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.495610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 
00:40:42.629 [2024-06-10 11:49:07.505494] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.505591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.505609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.505618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.505626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.505645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 00:40:42.629 [2024-06-10 11:49:07.515580] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.515665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.515682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.515692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.515700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.515718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 00:40:42.629 [2024-06-10 11:49:07.525628] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.525714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.525731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.525740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.525749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.525766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 
00:40:42.629 [2024-06-10 11:49:07.535656] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.535747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.535765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.535774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.535785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.535803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 00:40:42.629 [2024-06-10 11:49:07.545704] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.545788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.545805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.545814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.545823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.545840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 00:40:42.629 [2024-06-10 11:49:07.555728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.555837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.555854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.555864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.555873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.555891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 
00:40:42.629 [2024-06-10 11:49:07.565757] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.565841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.565858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.565868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.565876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.565894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 00:40:42.629 [2024-06-10 11:49:07.575788] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.575874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.575892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.575901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.575910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.575927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 00:40:42.629 [2024-06-10 11:49:07.585813] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.585899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.585917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.585926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.585934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.585952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 
00:40:42.629 [2024-06-10 11:49:07.595853] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.595951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.595969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.595978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.595987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.629 [2024-06-10 11:49:07.596005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.629 qpair failed and we were unable to recover it. 00:40:42.629 [2024-06-10 11:49:07.605858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.629 [2024-06-10 11:49:07.605946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.629 [2024-06-10 11:49:07.605963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.629 [2024-06-10 11:49:07.605972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.629 [2024-06-10 11:49:07.605981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.605999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 00:40:42.630 [2024-06-10 11:49:07.615908] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.615991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.616008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.616017] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.616026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.616043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 
00:40:42.630 [2024-06-10 11:49:07.625938] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.626023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.626040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.626052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.626061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.626078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 00:40:42.630 [2024-06-10 11:49:07.635948] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.636143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.636161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.636170] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.636179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.636197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 00:40:42.630 [2024-06-10 11:49:07.645991] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.646071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.646089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.646098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.646106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.646124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 
00:40:42.630 [2024-06-10 11:49:07.655995] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.656080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.656098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.656107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.656116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.656133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 00:40:42.630 [2024-06-10 11:49:07.666055] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.666154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.666172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.666181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.666189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.666207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 00:40:42.630 [2024-06-10 11:49:07.676062] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.676144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.676162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.676171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.676179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.676197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 
00:40:42.630 [2024-06-10 11:49:07.686116] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.686196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.686213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.686223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.686231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.686248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 00:40:42.630 [2024-06-10 11:49:07.696123] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.696213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.696230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.696240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.696248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.696266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 00:40:42.630 [2024-06-10 11:49:07.706160] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.706251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.706268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.706277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.706286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.706303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 
00:40:42.630 [2024-06-10 11:49:07.716188] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.716270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.716292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.716301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.716310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.716329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 00:40:42.630 [2024-06-10 11:49:07.726209] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.630 [2024-06-10 11:49:07.726321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.630 [2024-06-10 11:49:07.726339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.630 [2024-06-10 11:49:07.726348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.630 [2024-06-10 11:49:07.726357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.630 [2024-06-10 11:49:07.726374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.630 qpair failed and we were unable to recover it. 00:40:42.890 [2024-06-10 11:49:07.736241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.890 [2024-06-10 11:49:07.736335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.890 [2024-06-10 11:49:07.736353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.890 [2024-06-10 11:49:07.736362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.890 [2024-06-10 11:49:07.736371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.890 [2024-06-10 11:49:07.736390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.890 qpair failed and we were unable to recover it. 
00:40:42.890 [2024-06-10 11:49:07.746307] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.746430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.746448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.746457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.746466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.746484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 00:40:42.891 [2024-06-10 11:49:07.756303] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.756388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.756406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.756415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.756423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.756444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 00:40:42.891 [2024-06-10 11:49:07.766353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.766439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.766457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.766466] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.766475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.766492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 
00:40:42.891 [2024-06-10 11:49:07.776359] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.776445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.776463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.776474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.776482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.776500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 00:40:42.891 [2024-06-10 11:49:07.786301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.786401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.786418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.786428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.786436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.786455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 00:40:42.891 [2024-06-10 11:49:07.796369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.796455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.796472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.796481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.796490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.796508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 
00:40:42.891 [2024-06-10 11:49:07.806446] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.806527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.806548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.806558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.806566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.806590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 00:40:42.891 [2024-06-10 11:49:07.816461] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.816547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.816564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.816573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.816588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.816605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 00:40:42.891 [2024-06-10 11:49:07.826504] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.826593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.826612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.826621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.826630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.826650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 
00:40:42.891 [2024-06-10 11:49:07.836525] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.836609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.836627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.836636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.836645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.836663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 00:40:42.891 [2024-06-10 11:49:07.846583] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.846678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.846695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.846705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.846713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.846734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 00:40:42.891 [2024-06-10 11:49:07.856616] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.856737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.856754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.891 [2024-06-10 11:49:07.856763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.891 [2024-06-10 11:49:07.856772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.891 [2024-06-10 11:49:07.856789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.891 qpair failed and we were unable to recover it. 
00:40:42.891 [2024-06-10 11:49:07.866622] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.891 [2024-06-10 11:49:07.866710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.891 [2024-06-10 11:49:07.866728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.866737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.866746] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.866764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 00:40:42.892 [2024-06-10 11:49:07.876647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.876732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.876749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.876759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.876767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.876784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 00:40:42.892 [2024-06-10 11:49:07.886669] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.886749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.886766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.886776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.886784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.886802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 
00:40:42.892 [2024-06-10 11:49:07.896699] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.896791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.896809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.896818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.896826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.896844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 00:40:42.892 [2024-06-10 11:49:07.906730] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.906819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.906837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.906846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.906854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.906872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 00:40:42.892 [2024-06-10 11:49:07.916764] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.916852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.916869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.916878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.916887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.916905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 
00:40:42.892 [2024-06-10 11:49:07.926815] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.926896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.926913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.926922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.926931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.926948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 00:40:42.892 [2024-06-10 11:49:07.936766] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.936856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.936872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.936882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.936893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.936911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 00:40:42.892 [2024-06-10 11:49:07.946832] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.946918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.946935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.946945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.946953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.946971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 
00:40:42.892 [2024-06-10 11:49:07.956871] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.956953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.956970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.956980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.956988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.957005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 00:40:42.892 [2024-06-10 11:49:07.966917] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.967003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.967020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.967030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.967038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.967055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 00:40:42.892 [2024-06-10 11:49:07.976964] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.977084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.977101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.977110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.977118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.977136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 
00:40:42.892 [2024-06-10 11:49:07.986953] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:42.892 [2024-06-10 11:49:07.987042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:42.892 [2024-06-10 11:49:07.987060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:42.892 [2024-06-10 11:49:07.987069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:42.892 [2024-06-10 11:49:07.987077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:42.892 [2024-06-10 11:49:07.987095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:42.892 qpair failed and we were unable to recover it. 00:40:43.152 [2024-06-10 11:49:07.996987] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:07.997073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:07.997092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:07.997102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:07.997110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:07.997129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 00:40:43.152 [2024-06-10 11:49:08.006968] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.007055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.007073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.007083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.007091] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.007109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 
00:40:43.152 [2024-06-10 11:49:08.017049] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.017141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.017158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.017168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.017176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.017194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 00:40:43.152 [2024-06-10 11:49:08.027098] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.027189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.027207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.027219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.027228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.027246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 00:40:43.152 [2024-06-10 11:49:08.037113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.037200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.037218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.037227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.037235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.037253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 
00:40:43.152 [2024-06-10 11:49:08.047141] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.047223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.047240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.047249] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.047257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.047275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 00:40:43.152 [2024-06-10 11:49:08.057160] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.057246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.057264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.057273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.057281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.057299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 00:40:43.152 [2024-06-10 11:49:08.067205] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.067291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.067309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.067318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.067327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.067344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 
00:40:43.152 [2024-06-10 11:49:08.077224] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.077328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.077345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.077355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.077363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.077381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 00:40:43.152 [2024-06-10 11:49:08.087247] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.087367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.087385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.087395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.087403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.087421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 00:40:43.152 [2024-06-10 11:49:08.097283] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.152 [2024-06-10 11:49:08.097450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.152 [2024-06-10 11:49:08.097467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.152 [2024-06-10 11:49:08.097476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.152 [2024-06-10 11:49:08.097485] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.152 [2024-06-10 11:49:08.097503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.152 qpair failed and we were unable to recover it. 
00:40:43.152 [2024-06-10 11:49:08.107304] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.107389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.107406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.107415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.107424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.107441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 00:40:43.153 [2024-06-10 11:49:08.117353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.117439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.117456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.117468] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.117477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.117494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 00:40:43.153 [2024-06-10 11:49:08.127432] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.127555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.127572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.127587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.127595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.127613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 
00:40:43.153 [2024-06-10 11:49:08.137505] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.137598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.137616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.137625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.137633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.137651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 00:40:43.153 [2024-06-10 11:49:08.147440] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.147526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.147543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.147553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.147562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.147584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 00:40:43.153 [2024-06-10 11:49:08.157459] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.157547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.157565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.157574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.157588] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.157606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 
00:40:43.153 [2024-06-10 11:49:08.167474] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.167617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.167635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.167644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.167653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.167671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 00:40:43.153 [2024-06-10 11:49:08.177462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.177553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.177571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.177585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.177593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.177611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 00:40:43.153 [2024-06-10 11:49:08.187561] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.187655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.187672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.187682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.187690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.187709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 
00:40:43.153 [2024-06-10 11:49:08.197620] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.197719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.197737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.197746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.197755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.197772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 00:40:43.153 [2024-06-10 11:49:08.207534] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.207667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.207690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.207700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.207708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.207727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 00:40:43.153 [2024-06-10 11:49:08.217647] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.217732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.217750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.217759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.217767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.217785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 
00:40:43.153 [2024-06-10 11:49:08.227672] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.227758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.153 [2024-06-10 11:49:08.227775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.153 [2024-06-10 11:49:08.227785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.153 [2024-06-10 11:49:08.227793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.153 [2024-06-10 11:49:08.227811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.153 qpair failed and we were unable to recover it. 00:40:43.153 [2024-06-10 11:49:08.237702] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.153 [2024-06-10 11:49:08.237802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.154 [2024-06-10 11:49:08.237819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.154 [2024-06-10 11:49:08.237829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.154 [2024-06-10 11:49:08.237838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.154 [2024-06-10 11:49:08.237855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.154 qpair failed and we were unable to recover it. 00:40:43.154 [2024-06-10 11:49:08.247662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.154 [2024-06-10 11:49:08.247747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.154 [2024-06-10 11:49:08.247765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.154 [2024-06-10 11:49:08.247774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.154 [2024-06-10 11:49:08.247782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.154 [2024-06-10 11:49:08.247803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.154 qpair failed and we were unable to recover it. 
00:40:43.413 [2024-06-10 11:49:08.257739] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.257835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.257854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.257863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.413 [2024-06-10 11:49:08.257872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.413 [2024-06-10 11:49:08.257890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.413 qpair failed and we were unable to recover it. 00:40:43.413 [2024-06-10 11:49:08.267809] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.267896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.267913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.267923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.413 [2024-06-10 11:49:08.267932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.413 [2024-06-10 11:49:08.267950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.413 qpair failed and we were unable to recover it. 00:40:43.413 [2024-06-10 11:49:08.277812] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.277901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.277918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.277928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.413 [2024-06-10 11:49:08.277936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.413 [2024-06-10 11:49:08.277954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.413 qpair failed and we were unable to recover it. 
00:40:43.413 [2024-06-10 11:49:08.287886] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.287988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.288005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.288014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.413 [2024-06-10 11:49:08.288022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.413 [2024-06-10 11:49:08.288040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.413 qpair failed and we were unable to recover it. 00:40:43.413 [2024-06-10 11:49:08.297885] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.297973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.297993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.298003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.413 [2024-06-10 11:49:08.298011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.413 [2024-06-10 11:49:08.298029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.413 qpair failed and we were unable to recover it. 00:40:43.413 [2024-06-10 11:49:08.307923] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.308012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.308029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.308039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.413 [2024-06-10 11:49:08.308047] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.413 [2024-06-10 11:49:08.308064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.413 qpair failed and we were unable to recover it. 
00:40:43.413 [2024-06-10 11:49:08.317932] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.318025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.318042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.318051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.413 [2024-06-10 11:49:08.318060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.413 [2024-06-10 11:49:08.318077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.413 qpair failed and we were unable to recover it. 00:40:43.413 [2024-06-10 11:49:08.328018] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.328105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.328122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.328131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.413 [2024-06-10 11:49:08.328140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.413 [2024-06-10 11:49:08.328158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.413 qpair failed and we were unable to recover it. 00:40:43.413 [2024-06-10 11:49:08.338003] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.338094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.338111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.338120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.413 [2024-06-10 11:49:08.338133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.413 [2024-06-10 11:49:08.338152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.413 qpair failed and we were unable to recover it. 
00:40:43.413 [2024-06-10 11:49:08.348039] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.413 [2024-06-10 11:49:08.348124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.413 [2024-06-10 11:49:08.348142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.413 [2024-06-10 11:49:08.348151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.348160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.348177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 00:40:43.414 [2024-06-10 11:49:08.358100] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.358187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.358204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.358214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.358222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.358239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 00:40:43.414 [2024-06-10 11:49:08.368113] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.368195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.368213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.368222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.368230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.368248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 
00:40:43.414 [2024-06-10 11:49:08.378126] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.378243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.378259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.378269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.378277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.378295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 00:40:43.414 [2024-06-10 11:49:08.388080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.388180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.388198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.388207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.388215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.388233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 00:40:43.414 [2024-06-10 11:49:08.398214] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.398301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.398319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.398328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.398337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.398354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 
00:40:43.414 [2024-06-10 11:49:08.408196] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.408284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.408302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.408312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.408320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.408337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 00:40:43.414 [2024-06-10 11:49:08.418238] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.418323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.418341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.418351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.418359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.418377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 00:40:43.414 [2024-06-10 11:49:08.428271] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.428360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.428378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.428391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.428400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.428418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 
00:40:43.414 [2024-06-10 11:49:08.438236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.438329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.438347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.438356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.438364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.438382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 00:40:43.414 [2024-06-10 11:49:08.448355] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.448463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.448481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.448491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.448500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.414 [2024-06-10 11:49:08.448518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.414 qpair failed and we were unable to recover it. 00:40:43.414 [2024-06-10 11:49:08.458303] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.414 [2024-06-10 11:49:08.458409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.414 [2024-06-10 11:49:08.458426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.414 [2024-06-10 11:49:08.458435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.414 [2024-06-10 11:49:08.458443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.415 [2024-06-10 11:49:08.458461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.415 qpair failed and we were unable to recover it. 
00:40:43.415 [2024-06-10 11:49:08.468393] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.415 [2024-06-10 11:49:08.468478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.415 [2024-06-10 11:49:08.468496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.415 [2024-06-10 11:49:08.468505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.415 [2024-06-10 11:49:08.468513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.415 [2024-06-10 11:49:08.468531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.415 qpair failed and we were unable to recover it. 00:40:43.415 [2024-06-10 11:49:08.478403] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.415 [2024-06-10 11:49:08.478485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.415 [2024-06-10 11:49:08.478503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.415 [2024-06-10 11:49:08.478512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.415 [2024-06-10 11:49:08.478520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.415 [2024-06-10 11:49:08.478538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.415 qpair failed and we were unable to recover it. 00:40:43.415 [2024-06-10 11:49:08.488400] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.415 [2024-06-10 11:49:08.488489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.415 [2024-06-10 11:49:08.488506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.415 [2024-06-10 11:49:08.488515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.415 [2024-06-10 11:49:08.488524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.415 [2024-06-10 11:49:08.488542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.415 qpair failed and we were unable to recover it. 
00:40:43.415 [2024-06-10 11:49:08.498413] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.415 [2024-06-10 11:49:08.498499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.415 [2024-06-10 11:49:08.498517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.415 [2024-06-10 11:49:08.498527] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.415 [2024-06-10 11:49:08.498535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.415 [2024-06-10 11:49:08.498553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.415 qpair failed and we were unable to recover it. 00:40:43.415 [2024-06-10 11:49:08.508439] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.415 [2024-06-10 11:49:08.508530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.415 [2024-06-10 11:49:08.508548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.415 [2024-06-10 11:49:08.508557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.415 [2024-06-10 11:49:08.508565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.415 [2024-06-10 11:49:08.508588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.415 qpair failed and we were unable to recover it. 00:40:43.673 [2024-06-10 11:49:08.518463] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.673 [2024-06-10 11:49:08.518549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.518567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.518589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.518598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.518617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 
00:40:43.674 [2024-06-10 11:49:08.528572] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.528671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.528690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.528700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.528709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.528727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.538614] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.538701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.538719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.538728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.538737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.538755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.548652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.548757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.548774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.548783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.548791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.548809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 
00:40:43.674 [2024-06-10 11:49:08.558646] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.558731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.558749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.558758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.558767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.558784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.568604] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.568686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.568704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.568714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.568722] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.568740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.578646] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.578732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.578749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.578758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.578767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.578784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 
00:40:43.674 [2024-06-10 11:49:08.588740] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.588824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.588842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.588852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.588860] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.588878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.598689] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.598770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.598788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.598797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.598805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.598823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.608785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.608867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.608888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.608897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.608906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.608923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 
00:40:43.674 [2024-06-10 11:49:08.618770] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.618856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.618873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.618883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.618892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.618910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.628782] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.628867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.628884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.628893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.628902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.628920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.638889] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.638972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.638989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.638998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.639007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.639024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 
00:40:43.674 [2024-06-10 11:49:08.648907] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.648986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.649003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.649013] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.649021] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.649042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.658963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.659084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.659102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.659111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.659119] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.659137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.669012] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.669120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.669137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.669146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.669154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.669171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 
00:40:43.674 [2024-06-10 11:49:08.678993] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.679079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.679096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.679105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.679113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.679130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.689075] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.689163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.689180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.689189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.689197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.689214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 00:40:43.674 [2024-06-10 11:49:08.699079] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.674 [2024-06-10 11:49:08.699196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.674 [2024-06-10 11:49:08.699216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.674 [2024-06-10 11:49:08.699226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.674 [2024-06-10 11:49:08.699234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.674 [2024-06-10 11:49:08.699252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.674 qpair failed and we were unable to recover it. 
00:40:43.674 [2024-06-10 11:49:08.709013] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.675 [2024-06-10 11:49:08.709104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.675 [2024-06-10 11:49:08.709121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.675 [2024-06-10 11:49:08.709130] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.675 [2024-06-10 11:49:08.709139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.675 [2024-06-10 11:49:08.709156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.675 qpair failed and we were unable to recover it. 00:40:43.675 [2024-06-10 11:49:08.719149] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.675 [2024-06-10 11:49:08.719234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.675 [2024-06-10 11:49:08.719251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.675 [2024-06-10 11:49:08.719260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.675 [2024-06-10 11:49:08.719268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.675 [2024-06-10 11:49:08.719286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.675 qpair failed and we were unable to recover it. 00:40:43.675 [2024-06-10 11:49:08.729139] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.675 [2024-06-10 11:49:08.729220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.675 [2024-06-10 11:49:08.729237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.675 [2024-06-10 11:49:08.729247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.675 [2024-06-10 11:49:08.729255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.675 [2024-06-10 11:49:08.729272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.675 qpair failed and we were unable to recover it. 
00:40:43.675 [2024-06-10 11:49:08.739166] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.675 [2024-06-10 11:49:08.739255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.675 [2024-06-10 11:49:08.739272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.675 [2024-06-10 11:49:08.739281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.675 [2024-06-10 11:49:08.739292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.675 [2024-06-10 11:49:08.739310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.675 qpair failed and we were unable to recover it. 00:40:43.675 [2024-06-10 11:49:08.749200] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.675 [2024-06-10 11:49:08.749291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.675 [2024-06-10 11:49:08.749309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.675 [2024-06-10 11:49:08.749318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.675 [2024-06-10 11:49:08.749327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.675 [2024-06-10 11:49:08.749344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.675 qpair failed and we were unable to recover it. 00:40:43.675 [2024-06-10 11:49:08.759216] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.675 [2024-06-10 11:49:08.759331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.675 [2024-06-10 11:49:08.759348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.675 [2024-06-10 11:49:08.759357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.675 [2024-06-10 11:49:08.759366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.675 [2024-06-10 11:49:08.759383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.675 qpair failed and we were unable to recover it. 
00:40:43.675 [2024-06-10 11:49:08.769269] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.675 [2024-06-10 11:49:08.769353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.675 [2024-06-10 11:49:08.769371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.675 [2024-06-10 11:49:08.769380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.675 [2024-06-10 11:49:08.769389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.675 [2024-06-10 11:49:08.769407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.675 qpair failed and we were unable to recover it. 00:40:43.934 [2024-06-10 11:49:08.779268] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.934 [2024-06-10 11:49:08.779356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.934 [2024-06-10 11:49:08.779374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.934 [2024-06-10 11:49:08.779384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.934 [2024-06-10 11:49:08.779392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.934 [2024-06-10 11:49:08.779410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.934 qpair failed and we were unable to recover it. 00:40:43.934 [2024-06-10 11:49:08.789325] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.934 [2024-06-10 11:49:08.789418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.934 [2024-06-10 11:49:08.789437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.934 [2024-06-10 11:49:08.789447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.934 [2024-06-10 11:49:08.789455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.934 [2024-06-10 11:49:08.789473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.934 qpair failed and we were unable to recover it. 
00:40:43.934 [2024-06-10 11:49:08.799378] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.934 [2024-06-10 11:49:08.799476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.934 [2024-06-10 11:49:08.799494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.934 [2024-06-10 11:49:08.799504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.934 [2024-06-10 11:49:08.799512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.934 [2024-06-10 11:49:08.799530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.934 qpair failed and we were unable to recover it. 00:40:43.934 [2024-06-10 11:49:08.809364] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.934 [2024-06-10 11:49:08.809449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.934 [2024-06-10 11:49:08.809466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.934 [2024-06-10 11:49:08.809475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.934 [2024-06-10 11:49:08.809484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.934 [2024-06-10 11:49:08.809501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.934 qpair failed and we were unable to recover it. 00:40:43.934 [2024-06-10 11:49:08.819378] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.934 [2024-06-10 11:49:08.819473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.934 [2024-06-10 11:49:08.819491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.934 [2024-06-10 11:49:08.819500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.934 [2024-06-10 11:49:08.819508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.934 [2024-06-10 11:49:08.819526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.934 qpair failed and we were unable to recover it. 
00:40:43.934 [2024-06-10 11:49:08.829437] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.934 [2024-06-10 11:49:08.829519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.934 [2024-06-10 11:49:08.829536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.934 [2024-06-10 11:49:08.829545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.934 [2024-06-10 11:49:08.829557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.934 [2024-06-10 11:49:08.829579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.934 qpair failed and we were unable to recover it. 00:40:43.934 [2024-06-10 11:49:08.839457] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.934 [2024-06-10 11:49:08.839541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.934 [2024-06-10 11:49:08.839558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.934 [2024-06-10 11:49:08.839567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.934 [2024-06-10 11:49:08.839580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.934 [2024-06-10 11:49:08.839599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.934 qpair failed and we were unable to recover it. 00:40:43.934 [2024-06-10 11:49:08.849461] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.934 [2024-06-10 11:49:08.849546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.934 [2024-06-10 11:49:08.849563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.934 [2024-06-10 11:49:08.849572] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.934 [2024-06-10 11:49:08.849585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.934 [2024-06-10 11:49:08.849603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.934 qpair failed and we were unable to recover it. 
00:40:43.934 [2024-06-10 11:49:08.859510] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.934 [2024-06-10 11:49:08.859598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.934 [2024-06-10 11:49:08.859616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.934 [2024-06-10 11:49:08.859626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.934 [2024-06-10 11:49:08.859634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.934 [2024-06-10 11:49:08.859652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.934 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.869525] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.869618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.869635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.869644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.869653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.869670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.879588] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.879682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.879699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.879709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.879717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.879734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 
00:40:43.935 [2024-06-10 11:49:08.889585] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.889713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.889731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.889740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.889748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.889766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.899622] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.899757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.899774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.899783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.899791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.899809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.909659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.909741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.909758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.909767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.909776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.909794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 
00:40:43.935 [2024-06-10 11:49:08.919711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.919819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.919836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.919849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.919857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.919875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.929711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.929797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.929814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.929823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.929832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.929849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.939731] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.939835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.939853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.939862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.939870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.939888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 
00:40:43.935 [2024-06-10 11:49:08.949829] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.949921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.949938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.949947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.949955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.949972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.959797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.959879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.959897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.959906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.959915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.959932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.969844] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.969929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.969946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.969955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.969964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.969982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 
00:40:43.935 [2024-06-10 11:49:08.979853] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.979936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.979953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.979962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.979971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.979989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.989939] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:08.990042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:08.990060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.935 [2024-06-10 11:49:08.990069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.935 [2024-06-10 11:49:08.990077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.935 [2024-06-10 11:49:08.990095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.935 qpair failed and we were unable to recover it. 00:40:43.935 [2024-06-10 11:49:08.999916] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.935 [2024-06-10 11:49:09.000000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.935 [2024-06-10 11:49:09.000017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.936 [2024-06-10 11:49:09.000026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.936 [2024-06-10 11:49:09.000035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.936 [2024-06-10 11:49:09.000052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.936 qpair failed and we were unable to recover it. 
00:40:43.936 [2024-06-10 11:49:09.009947] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.936 [2024-06-10 11:49:09.010035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.936 [2024-06-10 11:49:09.010055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.936 [2024-06-10 11:49:09.010064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.936 [2024-06-10 11:49:09.010073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.936 [2024-06-10 11:49:09.010091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.936 qpair failed and we were unable to recover it. 00:40:43.936 [2024-06-10 11:49:09.019963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.936 [2024-06-10 11:49:09.020053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.936 [2024-06-10 11:49:09.020070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.936 [2024-06-10 11:49:09.020079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.936 [2024-06-10 11:49:09.020088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.936 [2024-06-10 11:49:09.020106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.936 qpair failed and we were unable to recover it. 00:40:43.936 [2024-06-10 11:49:09.030011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:43.936 [2024-06-10 11:49:09.030111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:43.936 [2024-06-10 11:49:09.030129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:43.936 [2024-06-10 11:49:09.030138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:43.936 [2024-06-10 11:49:09.030146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:43.936 [2024-06-10 11:49:09.030164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:43.936 qpair failed and we were unable to recover it. 
00:40:44.194 [2024-06-10 11:49:09.040017] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.040101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.040119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.040129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.040137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.040155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 00:40:44.194 [2024-06-10 11:49:09.050079] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.050162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.050180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.050189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.050198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.050219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 00:40:44.194 [2024-06-10 11:49:09.060090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.060187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.060204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.060213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.060222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.060239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 
00:40:44.194 [2024-06-10 11:49:09.070141] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.070227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.070245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.070254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.070263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.070280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 00:40:44.194 [2024-06-10 11:49:09.080166] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.080246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.080263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.080272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.080281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.080299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 00:40:44.194 [2024-06-10 11:49:09.090183] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.090267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.090285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.090295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.090304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.090321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 
00:40:44.194 [2024-06-10 11:49:09.100215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.100300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.100320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.100329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.100338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.100355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 00:40:44.194 [2024-06-10 11:49:09.110251] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.110338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.110355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.110364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.110372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.110389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 00:40:44.194 [2024-06-10 11:49:09.120271] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.120353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.120370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.120380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.120388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.120405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 
00:40:44.194 [2024-06-10 11:49:09.130294] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.130459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.130476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.130485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.130493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.194 [2024-06-10 11:49:09.130511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.194 qpair failed and we were unable to recover it. 00:40:44.194 [2024-06-10 11:49:09.140344] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.194 [2024-06-10 11:49:09.140465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.194 [2024-06-10 11:49:09.140482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.194 [2024-06-10 11:49:09.140491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.194 [2024-06-10 11:49:09.140502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.140521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 00:40:44.195 [2024-06-10 11:49:09.150376] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.150470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.150489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.150498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.150507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.150525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 
00:40:44.195 [2024-06-10 11:49:09.160378] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.160462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.160480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.160489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.160498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.160516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 00:40:44.195 [2024-06-10 11:49:09.170462] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.170546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.170564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.170573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.170586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.170604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 00:40:44.195 [2024-06-10 11:49:09.180429] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.180544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.180561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.180570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.180583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.180601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 
00:40:44.195 [2024-06-10 11:49:09.190475] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.190566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.190588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.190597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.190606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.190624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 00:40:44.195 [2024-06-10 11:49:09.200516] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.200613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.200630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.200639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.200647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.200665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 00:40:44.195 [2024-06-10 11:49:09.210526] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.210621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.210638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.210647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.210655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.210673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 
00:40:44.195 [2024-06-10 11:49:09.220545] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.220637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.220654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.220664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.220672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.220689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 00:40:44.195 [2024-06-10 11:49:09.230581] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.230670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.230687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.230696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.230707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.230726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 00:40:44.195 [2024-06-10 11:49:09.240544] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.240672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.240689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.240699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.240707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.240725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 
00:40:44.195 [2024-06-10 11:49:09.250671] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.250773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.250790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.250800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.250808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.250826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 00:40:44.195 [2024-06-10 11:49:09.260665] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.260752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.260769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.260779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.260787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.260805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 00:40:44.195 [2024-06-10 11:49:09.270699] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.270785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.195 [2024-06-10 11:49:09.270802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.195 [2024-06-10 11:49:09.270812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.195 [2024-06-10 11:49:09.270820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.195 [2024-06-10 11:49:09.270837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.195 qpair failed and we were unable to recover it. 
00:40:44.195 [2024-06-10 11:49:09.280736] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.195 [2024-06-10 11:49:09.280824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.196 [2024-06-10 11:49:09.280841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.196 [2024-06-10 11:49:09.280850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.196 [2024-06-10 11:49:09.280859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.196 [2024-06-10 11:49:09.280876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.196 qpair failed and we were unable to recover it. 00:40:44.196 [2024-06-10 11:49:09.290780] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.196 [2024-06-10 11:49:09.290883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.196 [2024-06-10 11:49:09.290900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.196 [2024-06-10 11:49:09.290909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.196 [2024-06-10 11:49:09.290917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.196 [2024-06-10 11:49:09.290935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.196 qpair failed and we were unable to recover it. 00:40:44.454 [2024-06-10 11:49:09.300839] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.454 [2024-06-10 11:49:09.300927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.454 [2024-06-10 11:49:09.300945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.454 [2024-06-10 11:49:09.300955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.454 [2024-06-10 11:49:09.300963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.454 [2024-06-10 11:49:09.300981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.454 qpair failed and we were unable to recover it. 
00:40:44.454 [2024-06-10 11:49:09.310834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.454 [2024-06-10 11:49:09.310918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.454 [2024-06-10 11:49:09.310936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.454 [2024-06-10 11:49:09.310945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.454 [2024-06-10 11:49:09.310954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.454 [2024-06-10 11:49:09.310972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.454 qpair failed and we were unable to recover it. 00:40:44.454 [2024-06-10 11:49:09.320782] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.454 [2024-06-10 11:49:09.320865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.454 [2024-06-10 11:49:09.320882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.454 [2024-06-10 11:49:09.320894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.454 [2024-06-10 11:49:09.320903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.454 [2024-06-10 11:49:09.320920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.454 qpair failed and we were unable to recover it. 00:40:44.454 [2024-06-10 11:49:09.330906] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.454 [2024-06-10 11:49:09.331008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.454 [2024-06-10 11:49:09.331025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.454 [2024-06-10 11:49:09.331034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.454 [2024-06-10 11:49:09.331043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.454 [2024-06-10 11:49:09.331061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.454 qpair failed and we were unable to recover it. 
00:40:44.454 [2024-06-10 11:49:09.340912] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.454 [2024-06-10 11:49:09.341000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.454 [2024-06-10 11:49:09.341017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.454 [2024-06-10 11:49:09.341027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.454 [2024-06-10 11:49:09.341035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.454 [2024-06-10 11:49:09.341053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.454 qpair failed and we were unable to recover it. 00:40:44.454 [2024-06-10 11:49:09.350973] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.454 [2024-06-10 11:49:09.351079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.454 [2024-06-10 11:49:09.351096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.454 [2024-06-10 11:49:09.351106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.454 [2024-06-10 11:49:09.351114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.454 [2024-06-10 11:49:09.351131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.454 qpair failed and we were unable to recover it. 00:40:44.454 [2024-06-10 11:49:09.360972] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.454 [2024-06-10 11:49:09.361108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.454 [2024-06-10 11:49:09.361125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.454 [2024-06-10 11:49:09.361134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.454 [2024-06-10 11:49:09.361142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.454 [2024-06-10 11:49:09.361160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.454 qpair failed and we were unable to recover it. 
00:40:44.454 [2024-06-10 11:49:09.371024] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.454 [2024-06-10 11:49:09.371107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.454 [2024-06-10 11:49:09.371124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.454 [2024-06-10 11:49:09.371133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.371142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.371159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.381011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.381101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.381118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.381128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.381136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.381154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.391090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.391181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.391199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.391208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.391217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.391234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 
00:40:44.455 [2024-06-10 11:49:09.401093] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.401177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.401194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.401203] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.401212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.401230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.411143] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.411229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.411250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.411259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.411268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.411286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.421129] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.421217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.421234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.421243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.421252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.421269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 
00:40:44.455 [2024-06-10 11:49:09.431187] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.431281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.431299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.431308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.431317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.431334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.441183] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.441266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.441283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.441292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.441301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.441318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.451229] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.451323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.451341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.451350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.451358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.451378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 
00:40:44.455 [2024-06-10 11:49:09.461259] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.461346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.461363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.461372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.461381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.461398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.471285] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.471364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.471382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.471391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.471400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.471418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.481324] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.481408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.481425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.481434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.481443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.481460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 
00:40:44.455 [2024-06-10 11:49:09.491348] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.491430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.491447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.491457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.491465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.491483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.501375] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.455 [2024-06-10 11:49:09.501465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.455 [2024-06-10 11:49:09.501484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.455 [2024-06-10 11:49:09.501494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.455 [2024-06-10 11:49:09.501502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.455 [2024-06-10 11:49:09.501519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.455 qpair failed and we were unable to recover it. 00:40:44.455 [2024-06-10 11:49:09.511472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.456 [2024-06-10 11:49:09.511564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.456 [2024-06-10 11:49:09.511588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.456 [2024-06-10 11:49:09.511597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.456 [2024-06-10 11:49:09.511606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.456 [2024-06-10 11:49:09.511624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.456 qpair failed and we were unable to recover it. 
00:40:44.456 [2024-06-10 11:49:09.521443] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.456 [2024-06-10 11:49:09.521527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.456 [2024-06-10 11:49:09.521544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.456 [2024-06-10 11:49:09.521553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.456 [2024-06-10 11:49:09.521562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.456 [2024-06-10 11:49:09.521584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.456 qpair failed and we were unable to recover it. 00:40:44.456 [2024-06-10 11:49:09.531473] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.456 [2024-06-10 11:49:09.531573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.456 [2024-06-10 11:49:09.531594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.456 [2024-06-10 11:49:09.531604] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.456 [2024-06-10 11:49:09.531612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.456 [2024-06-10 11:49:09.531630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.456 qpair failed and we were unable to recover it. 00:40:44.456 [2024-06-10 11:49:09.541472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.456 [2024-06-10 11:49:09.541590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.456 [2024-06-10 11:49:09.541607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.456 [2024-06-10 11:49:09.541616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.456 [2024-06-10 11:49:09.541625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.456 [2024-06-10 11:49:09.541646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.456 qpair failed and we were unable to recover it. 
00:40:44.456 [2024-06-10 11:49:09.551509] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.456 [2024-06-10 11:49:09.551600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.456 [2024-06-10 11:49:09.551617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.456 [2024-06-10 11:49:09.551627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.456 [2024-06-10 11:49:09.551635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.456 [2024-06-10 11:49:09.551652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.456 qpair failed and we were unable to recover it. 00:40:44.715 [2024-06-10 11:49:09.561524] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.715 [2024-06-10 11:49:09.561609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.715 [2024-06-10 11:49:09.561632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.715 [2024-06-10 11:49:09.561645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.715 [2024-06-10 11:49:09.561653] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.715 [2024-06-10 11:49:09.561672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.715 qpair failed and we were unable to recover it. 00:40:44.715 [2024-06-10 11:49:09.571585] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.715 [2024-06-10 11:49:09.571685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.715 [2024-06-10 11:49:09.571703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.715 [2024-06-10 11:49:09.571713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.715 [2024-06-10 11:49:09.571721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.715 [2024-06-10 11:49:09.571739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.715 qpair failed and we were unable to recover it. 
00:40:44.715 [2024-06-10 11:49:09.581600] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.715 [2024-06-10 11:49:09.581716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.715 [2024-06-10 11:49:09.581734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.715 [2024-06-10 11:49:09.581743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.715 [2024-06-10 11:49:09.581751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.715 [2024-06-10 11:49:09.581769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.715 qpair failed and we were unable to recover it. 00:40:44.715 [2024-06-10 11:49:09.591648] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.715 [2024-06-10 11:49:09.591739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.715 [2024-06-10 11:49:09.591757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.715 [2024-06-10 11:49:09.591767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.715 [2024-06-10 11:49:09.591775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.715 [2024-06-10 11:49:09.591793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.715 qpair failed and we were unable to recover it. 00:40:44.715 [2024-06-10 11:49:09.601673] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.715 [2024-06-10 11:49:09.601755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.715 [2024-06-10 11:49:09.601773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.715 [2024-06-10 11:49:09.601782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.715 [2024-06-10 11:49:09.601791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.715 [2024-06-10 11:49:09.601809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.715 qpair failed and we were unable to recover it. 
00:40:44.715 [2024-06-10 11:49:09.611635] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.715 [2024-06-10 11:49:09.611737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.611754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.611764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.611773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.611791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 00:40:44.716 [2024-06-10 11:49:09.621765] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.621850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.621868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.621877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.621885] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.621903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 00:40:44.716 [2024-06-10 11:49:09.631810] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.631906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.631923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.631933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.631944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.631962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 
00:40:44.716 [2024-06-10 11:49:09.641795] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.641880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.641898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.641907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.641916] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.641934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 00:40:44.716 [2024-06-10 11:49:09.651786] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.651926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.651943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.651953] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.651961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.651978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 00:40:44.716 [2024-06-10 11:49:09.661830] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.661941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.661959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.661969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.661978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.661996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 
00:40:44.716 [2024-06-10 11:49:09.671846] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.671931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.671949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.671960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.671969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.671987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 00:40:44.716 [2024-06-10 11:49:09.681838] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.681946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.681965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.681974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.681984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.682002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 00:40:44.716 [2024-06-10 11:49:09.691915] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.692002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.692020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.692029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.692037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.692054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 
00:40:44.716 [2024-06-10 11:49:09.701928] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.702014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.702032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.702041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.702050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.702067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 00:40:44.716 [2024-06-10 11:49:09.711923] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.712005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.712022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.712032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.712040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.712058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 00:40:44.716 [2024-06-10 11:49:09.722032] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.722192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.722210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.722224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.722232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.722250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 
00:40:44.716 [2024-06-10 11:49:09.732062] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.732149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.732167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.732176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.732184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.716 [2024-06-10 11:49:09.732202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.716 qpair failed and we were unable to recover it. 00:40:44.716 [2024-06-10 11:49:09.742030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.716 [2024-06-10 11:49:09.742118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.716 [2024-06-10 11:49:09.742136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.716 [2024-06-10 11:49:09.742145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.716 [2024-06-10 11:49:09.742153] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.717 [2024-06-10 11:49:09.742171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.717 qpair failed and we were unable to recover it. 00:40:44.717 [2024-06-10 11:49:09.752066] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.717 [2024-06-10 11:49:09.752180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.717 [2024-06-10 11:49:09.752198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.717 [2024-06-10 11:49:09.752207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.717 [2024-06-10 11:49:09.752215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.717 [2024-06-10 11:49:09.752233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.717 qpair failed and we were unable to recover it. 
00:40:44.717 [2024-06-10 11:49:09.762092] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.717 [2024-06-10 11:49:09.762176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.717 [2024-06-10 11:49:09.762193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.717 [2024-06-10 11:49:09.762202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.717 [2024-06-10 11:49:09.762211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.717 [2024-06-10 11:49:09.762228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.717 qpair failed and we were unable to recover it. 00:40:44.717 [2024-06-10 11:49:09.772132] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.717 [2024-06-10 11:49:09.772228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.717 [2024-06-10 11:49:09.772245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.717 [2024-06-10 11:49:09.772255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.717 [2024-06-10 11:49:09.772263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.717 [2024-06-10 11:49:09.772280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.717 qpair failed and we were unable to recover it. 00:40:44.717 [2024-06-10 11:49:09.782208] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.717 [2024-06-10 11:49:09.782325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.717 [2024-06-10 11:49:09.782342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.717 [2024-06-10 11:49:09.782351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.717 [2024-06-10 11:49:09.782359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.717 [2024-06-10 11:49:09.782376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.717 qpair failed and we were unable to recover it. 
00:40:44.717 [2024-06-10 11:49:09.792253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.717 [2024-06-10 11:49:09.792353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.717 [2024-06-10 11:49:09.792370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.717 [2024-06-10 11:49:09.792379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.717 [2024-06-10 11:49:09.792388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.717 [2024-06-10 11:49:09.792406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.717 qpair failed and we were unable to recover it. 00:40:44.717 [2024-06-10 11:49:09.802188] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.717 [2024-06-10 11:49:09.802273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.717 [2024-06-10 11:49:09.802290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.717 [2024-06-10 11:49:09.802300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.717 [2024-06-10 11:49:09.802308] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.717 [2024-06-10 11:49:09.802326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.717 qpair failed and we were unable to recover it. 00:40:44.717 [2024-06-10 11:49:09.812296] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.717 [2024-06-10 11:49:09.812501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.717 [2024-06-10 11:49:09.812520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.717 [2024-06-10 11:49:09.812532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.717 [2024-06-10 11:49:09.812541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.717 [2024-06-10 11:49:09.812559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.717 qpair failed and we were unable to recover it. 
00:40:44.976 [2024-06-10 11:49:09.822306] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.976 [2024-06-10 11:49:09.822401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.976 [2024-06-10 11:49:09.822419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.976 [2024-06-10 11:49:09.822429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.976 [2024-06-10 11:49:09.822437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.976 [2024-06-10 11:49:09.822455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.976 qpair failed and we were unable to recover it. 00:40:44.976 [2024-06-10 11:49:09.832357] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.976 [2024-06-10 11:49:09.832446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.976 [2024-06-10 11:49:09.832464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.976 [2024-06-10 11:49:09.832474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.976 [2024-06-10 11:49:09.832482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.976 [2024-06-10 11:49:09.832500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.976 qpair failed and we were unable to recover it. 00:40:44.976 [2024-06-10 11:49:09.842290] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.976 [2024-06-10 11:49:09.842384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.976 [2024-06-10 11:49:09.842402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.976 [2024-06-10 11:49:09.842411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.976 [2024-06-10 11:49:09.842420] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.976 [2024-06-10 11:49:09.842438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.976 qpair failed and we were unable to recover it. 
00:40:44.976 [2024-06-10 11:49:09.852419] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.976 [2024-06-10 11:49:09.852520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.976 [2024-06-10 11:49:09.852537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.976 [2024-06-10 11:49:09.852547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.976 [2024-06-10 11:49:09.852555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.976 [2024-06-10 11:49:09.852572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.976 qpair failed and we were unable to recover it. 00:40:44.976 [2024-06-10 11:49:09.862455] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.976 [2024-06-10 11:49:09.862572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.976 [2024-06-10 11:49:09.862595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.976 [2024-06-10 11:49:09.862605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.976 [2024-06-10 11:49:09.862613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.976 [2024-06-10 11:49:09.862632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.976 qpair failed and we were unable to recover it. 00:40:44.976 [2024-06-10 11:49:09.872394] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.976 [2024-06-10 11:49:09.872487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.976 [2024-06-10 11:49:09.872504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.976 [2024-06-10 11:49:09.872514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.976 [2024-06-10 11:49:09.872522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.976 [2024-06-10 11:49:09.872540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.976 qpair failed and we were unable to recover it. 
00:40:44.976 [2024-06-10 11:49:09.882511] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.976 [2024-06-10 11:49:09.882601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.976 [2024-06-10 11:49:09.882619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.976 [2024-06-10 11:49:09.882628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.976 [2024-06-10 11:49:09.882637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.976 [2024-06-10 11:49:09.882654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.976 qpair failed and we were unable to recover it. 00:40:44.976 [2024-06-10 11:49:09.892444] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.976 [2024-06-10 11:49:09.892610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.976 [2024-06-10 11:49:09.892628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.976 [2024-06-10 11:49:09.892637] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.892646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.892665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 00:40:44.977 [2024-06-10 11:49:09.902541] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.902638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.902658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.902667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.902676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.902693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 
00:40:44.977 [2024-06-10 11:49:09.912592] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.912682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.912699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.912709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.912717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.912735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 00:40:44.977 [2024-06-10 11:49:09.922570] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.922696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.922713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.922723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.922731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.922749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 00:40:44.977 [2024-06-10 11:49:09.932561] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.932650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.932667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.932677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.932685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.932703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 
00:40:44.977 [2024-06-10 11:49:09.942675] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.942765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.942782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.942791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.942800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.942821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 00:40:44.977 [2024-06-10 11:49:09.952708] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.952800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.952818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.952827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.952835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.952853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 00:40:44.977 [2024-06-10 11:49:09.962727] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.962811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.962828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.962837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.962845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.962863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 
00:40:44.977 [2024-06-10 11:49:09.972687] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.972774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.972792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.972801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.972809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.972827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 00:40:44.977 [2024-06-10 11:49:09.982744] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.982832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.982850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.982859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.982867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.982884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 00:40:44.977 [2024-06-10 11:49:09.992815] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:09.992945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:09.992966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:09.992975] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:09.992983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:09.993001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 
00:40:44.977 [2024-06-10 11:49:10.002836] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:10.002935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:10.002961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:10.002976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:10.002988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:10.003015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 00:40:44.977 [2024-06-10 11:49:10.012948] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:10.013042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:10.013064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:10.013074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:10.013083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:10.013104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 00:40:44.977 [2024-06-10 11:49:10.022916] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.977 [2024-06-10 11:49:10.023016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.977 [2024-06-10 11:49:10.023040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.977 [2024-06-10 11:49:10.023054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.977 [2024-06-10 11:49:10.023066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.977 [2024-06-10 11:49:10.023105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.977 qpair failed and we were unable to recover it. 
00:40:44.977 [2024-06-10 11:49:10.032873] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.978 [2024-06-10 11:49:10.032970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.978 [2024-06-10 11:49:10.032990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.978 [2024-06-10 11:49:10.033000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.978 [2024-06-10 11:49:10.033012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.978 [2024-06-10 11:49:10.033031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.978 qpair failed and we were unable to recover it. 00:40:44.978 [2024-06-10 11:49:10.043018] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.978 [2024-06-10 11:49:10.043111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.978 [2024-06-10 11:49:10.043129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.978 [2024-06-10 11:49:10.043139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.978 [2024-06-10 11:49:10.043147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.978 [2024-06-10 11:49:10.043166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.978 qpair failed and we were unable to recover it. 00:40:44.978 [2024-06-10 11:49:10.052991] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.978 [2024-06-10 11:49:10.053080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.978 [2024-06-10 11:49:10.053097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.978 [2024-06-10 11:49:10.053107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.978 [2024-06-10 11:49:10.053116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.978 [2024-06-10 11:49:10.053133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.978 qpair failed and we were unable to recover it. 
00:40:44.978 [2024-06-10 11:49:10.062959] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.978 [2024-06-10 11:49:10.063053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.978 [2024-06-10 11:49:10.063070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.978 [2024-06-10 11:49:10.063080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.978 [2024-06-10 11:49:10.063089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.978 [2024-06-10 11:49:10.063106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.978 qpair failed and we were unable to recover it. 00:40:44.978 [2024-06-10 11:49:10.072961] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:44.978 [2024-06-10 11:49:10.073051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:44.978 [2024-06-10 11:49:10.073069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:44.978 [2024-06-10 11:49:10.073078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:44.978 [2024-06-10 11:49:10.073087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:44.978 [2024-06-10 11:49:10.073104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:44.978 qpair failed and we were unable to recover it. 00:40:45.237 [2024-06-10 11:49:10.083079] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.083171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.083191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.083200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.083209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.083227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 
00:40:45.237 [2024-06-10 11:49:10.093066] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.093151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.093173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.093184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.093193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.093214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 00:40:45.237 [2024-06-10 11:49:10.103191] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.103324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.103344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.103353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.103362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.103381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 00:40:45.237 [2024-06-10 11:49:10.113246] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.113350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.113368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.113377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.113386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.113404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 
00:40:45.237 [2024-06-10 11:49:10.123194] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.123282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.123300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.123313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.123321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.123339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 00:40:45.237 [2024-06-10 11:49:10.133215] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.133300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.133316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.133326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.133334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.133351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 00:40:45.237 [2024-06-10 11:49:10.143253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.143337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.143356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.143365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.143373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.143391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 
00:40:45.237 [2024-06-10 11:49:10.153224] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.153307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.153326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.153335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.153344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.153361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 00:40:45.237 [2024-06-10 11:49:10.163316] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.163404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.163422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.163432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.163440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.163458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 00:40:45.237 [2024-06-10 11:49:10.173251] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.237 [2024-06-10 11:49:10.173337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.237 [2024-06-10 11:49:10.173355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.237 [2024-06-10 11:49:10.173364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.237 [2024-06-10 11:49:10.173372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.237 [2024-06-10 11:49:10.173390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.237 qpair failed and we were unable to recover it. 
00:40:45.237 [2024-06-10 11:49:10.183370] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.183456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.183473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.183483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.183491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.183509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 00:40:45.238 [2024-06-10 11:49:10.193389] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.193494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.193512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.193521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.193529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.193547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 00:40:45.238 [2024-06-10 11:49:10.203447] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.203540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.203557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.203567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.203581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.203599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 
00:40:45.238 [2024-06-10 11:49:10.213503] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.213598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.213616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.213629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.213637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.213655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 00:40:45.238 [2024-06-10 11:49:10.223497] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.223589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.223607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.223617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.223625] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.223643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 00:40:45.238 [2024-06-10 11:49:10.233510] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.233603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.233621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.233631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.233639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.233657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 
00:40:45.238 [2024-06-10 11:49:10.243464] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.243554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.243572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.243588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.243597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.243615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 00:40:45.238 [2024-06-10 11:49:10.253520] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.253613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.253631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.253640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.253649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.253666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 00:40:45.238 [2024-06-10 11:49:10.263541] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.263638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.263655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.263665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.263673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.263691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 
00:40:45.238 [2024-06-10 11:49:10.273652] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.273735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.273752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.273762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.273770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.273788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 00:40:45.238 [2024-06-10 11:49:10.283670] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.283766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.283784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.283793] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.283802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.283819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 00:40:45.238 [2024-06-10 11:49:10.293720] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.293803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.293821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.293830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.293838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.293855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 
00:40:45.238 [2024-06-10 11:49:10.303719] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.238 [2024-06-10 11:49:10.303806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.238 [2024-06-10 11:49:10.303826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.238 [2024-06-10 11:49:10.303836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.238 [2024-06-10 11:49:10.303844] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.238 [2024-06-10 11:49:10.303861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.238 qpair failed and we were unable to recover it. 00:40:45.238 [2024-06-10 11:49:10.313776] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.239 [2024-06-10 11:49:10.313875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.239 [2024-06-10 11:49:10.313892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.239 [2024-06-10 11:49:10.313902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.239 [2024-06-10 11:49:10.313910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.239 [2024-06-10 11:49:10.313927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.239 qpair failed and we were unable to recover it. 00:40:45.239 [2024-06-10 11:49:10.323797] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.239 [2024-06-10 11:49:10.323971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.239 [2024-06-10 11:49:10.323988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.239 [2024-06-10 11:49:10.323998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.239 [2024-06-10 11:49:10.324006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.239 [2024-06-10 11:49:10.324025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.239 qpair failed and we were unable to recover it. 
00:40:45.239 [2024-06-10 11:49:10.333770] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.239 [2024-06-10 11:49:10.333853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.239 [2024-06-10 11:49:10.333871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.239 [2024-06-10 11:49:10.333880] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.239 [2024-06-10 11:49:10.333889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.239 [2024-06-10 11:49:10.333906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.239 qpair failed and we were unable to recover it. 00:40:45.497 [2024-06-10 11:49:10.343848] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.497 [2024-06-10 11:49:10.343940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.497 [2024-06-10 11:49:10.343959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.497 [2024-06-10 11:49:10.343968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.497 [2024-06-10 11:49:10.343977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.497 [2024-06-10 11:49:10.343999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.497 qpair failed and we were unable to recover it. 00:40:45.497 [2024-06-10 11:49:10.353899] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.497 [2024-06-10 11:49:10.353984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.497 [2024-06-10 11:49:10.354002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.497 [2024-06-10 11:49:10.354012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.497 [2024-06-10 11:49:10.354020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.497 [2024-06-10 11:49:10.354038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.497 qpair failed and we were unable to recover it. 
00:40:45.497 [2024-06-10 11:49:10.363867] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.497 [2024-06-10 11:49:10.363998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.497 [2024-06-10 11:49:10.364014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.497 [2024-06-10 11:49:10.364024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.497 [2024-06-10 11:49:10.364032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.497 [2024-06-10 11:49:10.364050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.497 qpair failed and we were unable to recover it. 00:40:45.497 [2024-06-10 11:49:10.373945] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.497 [2024-06-10 11:49:10.374035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.497 [2024-06-10 11:49:10.374052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.497 [2024-06-10 11:49:10.374061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.497 [2024-06-10 11:49:10.374070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.497 [2024-06-10 11:49:10.374087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.497 qpair failed and we were unable to recover it. 00:40:45.497 [2024-06-10 11:49:10.383985] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.497 [2024-06-10 11:49:10.384095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.497 [2024-06-10 11:49:10.384112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.497 [2024-06-10 11:49:10.384122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.497 [2024-06-10 11:49:10.384130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.497 [2024-06-10 11:49:10.384148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.497 qpair failed and we were unable to recover it. 
00:40:45.497 [2024-06-10 11:49:10.394043] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.497 [2024-06-10 11:49:10.394141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.497 [2024-06-10 11:49:10.394162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.497 [2024-06-10 11:49:10.394172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.497 [2024-06-10 11:49:10.394180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4870000b90 00:40:45.497 [2024-06-10 11:49:10.394197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:45.497 qpair failed and we were unable to recover it. 00:40:45.497 [2024-06-10 11:49:10.404169] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.497 [2024-06-10 11:49:10.404362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.497 [2024-06-10 11:49:10.404428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.497 [2024-06-10 11:49:10.404464] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.497 [2024-06-10 11:49:10.404495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4868000b90 00:40:45.497 [2024-06-10 11:49:10.404558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:45.497 qpair failed and we were unable to recover it. 00:40:45.497 [2024-06-10 11:49:10.414131] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:45.497 [2024-06-10 11:49:10.414294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:45.497 [2024-06-10 11:49:10.414328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:45.497 [2024-06-10 11:49:10.414349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:45.497 [2024-06-10 11:49:10.414369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4868000b90 00:40:45.497 [2024-06-10 11:49:10.414408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:45.497 qpair failed and we were unable to recover it. 00:40:45.497 [2024-06-10 11:49:10.414595] nvme_ctrlr.c:4395:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:40:45.497 A controller has encountered a failure and is being reset. 00:40:45.497 [2024-06-10 11:49:10.414695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1215b50 (9): Bad file descriptor 00:40:45.497 qpair failed and we were unable to recover it. 00:40:45.497 Controller properly reset. 
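The stanzas above all carry the same signature: the target rejects the I/O queue CONNECT with "Unknown controller ID 0x1", the host sees the CONNECT completion with sct 1, sc 130, and the transport reports error -6 on qpair id 2 (and, in the last attempts, on qpair id 4 against a different tqpair) until the Keep Alive submission fails and the controller is reset. When triaging a console log like this, it can help to reduce the repetition to a short summary of how many stanzas occurred and which status codes and qpair ids they involve. The snippet below is a hypothetical triage helper, not part of the test suite; it only assumes the console output has been saved to a file (console.log here) and relies on standard grep/sort/uniq.

#!/usr/bin/env bash
# summarize_qpair_failures.sh - condense repeated qpair-failure stanzas from a saved console log.
# The log file name is an assumption; pass the real path as the first argument.
LOG=${1:-console.log}

# Total number of CONNECT attempts that failed and could not be recovered.
echo "unrecovered qpair failures:"
grep -c 'qpair failed and we were unable to recover it' "$LOG"

# Break the failures down by the (sct, sc) status pair reported by the fabric layer.
echo "status codes seen:"
grep -o 'sct [0-9]*, sc [0-9]*' "$LOG" | sort | uniq -c

# Break them down by the qpair id the transport error was reported on.
echo "qpair ids affected:"
grep -o 'on qpair id [0-9]*' "$LOG" | sort | uniq -c

Run as "./summarize_qpair_failures.sh console.log"; if every stanza reports the same sct/sc pair and only the expected qpair ids, the failures most likely correspond to the disconnects this test injects on purpose rather than a new failure mode.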
00:40:45.497 Initializing NVMe Controllers 00:40:45.497 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:45.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:45.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:40:45.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:40:45.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:40:45.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:40:45.497 Initialization complete. Launching workers. 00:40:45.497 Starting thread on core 1 00:40:45.497 Starting thread on core 2 00:40:45.497 Starting thread on core 3 00:40:45.497 Starting thread on core 0 00:40:45.497 11:49:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:40:45.497 00:40:45.497 real 0m11.549s 00:40:45.497 user 0m20.679s 00:40:45.497 sys 0m5.034s 00:40:45.497 11:49:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:45.498 ************************************ 00:40:45.498 END TEST nvmf_target_disconnect_tc2 00:40:45.498 ************************************ 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:45.498 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:45.498 rmmod nvme_tcp 00:40:45.498 rmmod nvme_fabrics 00:40:45.498 rmmod nvme_keyring 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 4177441 ']' 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 4177441 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 4177441 ']' 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 4177441 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4177441 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@955 -- # process_name=reactor_4 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4177441' 00:40:45.756 killing process with pid 4177441 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 4177441 00:40:45.756 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 4177441 00:40:46.014 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:46.014 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:46.014 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:46.014 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:46.014 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:46.014 11:49:10 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.014 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:46.014 11:49:10 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:47.917 11:49:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:47.917 00:40:47.917 real 0m23.148s 00:40:47.917 user 0m48.912s 00:40:47.917 sys 0m12.354s 00:40:47.917 11:49:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:47.917 11:49:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:47.917 ************************************ 00:40:47.917 END TEST nvmf_target_disconnect 00:40:47.917 ************************************ 00:40:48.176 11:49:13 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:40:48.176 11:49:13 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:48.176 11:49:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:48.176 11:49:13 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:40:48.176 00:40:48.176 real 25m3.117s 00:40:48.176 user 50m5.883s 00:40:48.176 sys 9m39.130s 00:40:48.176 11:49:13 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:48.176 11:49:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:48.176 ************************************ 00:40:48.176 END TEST nvmf_tcp 00:40:48.176 ************************************ 00:40:48.176 11:49:13 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:40:48.176 11:49:13 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:48.176 11:49:13 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:48.176 11:49:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:48.176 11:49:13 -- common/autotest_common.sh@10 -- # set +x 00:40:48.176 ************************************ 00:40:48.176 START TEST spdkcli_nvmf_tcp 00:40:48.176 ************************************ 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:48.176 * Looking for test storage... 
00:40:48.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:48.176 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:48.434 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:40:48.434 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4179076 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4179076 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 4179076 ']' 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:48.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:48.435 11:49:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:48.435 [2024-06-10 11:49:13.361733] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:40:48.435 [2024-06-10 11:49:13.361806] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179076 ] 00:40:48.435 EAL: No free 2048 kB hugepages reported on node 1 00:40:48.435 [2024-06-10 11:49:13.483309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:48.693 [2024-06-10 11:49:13.570268] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:48.693 [2024-06-10 11:49:13.570273] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:49.260 11:49:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:49.260 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:49.260 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:49.260 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:49.260 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:49.260 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:49.260 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:49.260 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:49.260 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:49.260 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:49.260 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:49.260 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:49.260 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:49.260 ' 00:40:51.806 [2024-06-10 11:49:16.758928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:53.184 [2024-06-10 11:49:17.935242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:55.089 [2024-06-10 11:49:20.102306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:56.993 [2024-06-10 11:49:21.960385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:58.369 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:58.369 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:58.369 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:58.369 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:58.369 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:58.369 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:58.369 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:58.369 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:58.369 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:58.369 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:58.369 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:58.369 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:58.369 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:58.628 11:49:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:58.628 11:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:58.628 11:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:58.628 11:49:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:58.628 11:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:58.628 11:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:58.628 11:49:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:58.628 11:49:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:58.887 11:49:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:58.887 11:49:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:58.887 11:49:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:58.887 11:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:58.887 11:49:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:59.146 11:49:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:59.146 11:49:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:59.146 11:49:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:59.147 11:49:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:59.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:59.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:59.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:59.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:59.147 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:59.147 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:59.147 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:59.147 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:59.147 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:59.147 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:59.147 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:59.147 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:59.147 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:59.147 ' 00:41:04.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:04.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:04.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:04.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:04.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:04.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:04.422 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:04.422 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:04.422 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:04.422 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:04.422 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:41:04.422 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:04.422 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:04.422 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4179076 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 4179076 ']' 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 4179076 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4179076 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4179076' 00:41:04.422 killing process with pid 4179076 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 4179076 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 4179076 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4179076 ']' 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4179076 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 4179076 ']' 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 4179076 00:41:04.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (4179076) - No such process 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 4179076 is not found' 00:41:04.422 Process with pid 4179076 is not found 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:04.422 00:41:04.422 real 0m16.238s 00:41:04.422 user 0m33.564s 00:41:04.422 sys 0m0.970s 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:04.422 11:49:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:04.422 ************************************ 00:41:04.422 END TEST spdkcli_nvmf_tcp 00:41:04.422 ************************************ 00:41:04.422 11:49:29 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:04.422 11:49:29 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:41:04.422 11:49:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:04.422 11:49:29 -- common/autotest_common.sh@10 -- # set +x 00:41:04.422 ************************************ 00:41:04.422 START TEST nvmf_identify_passthru 00:41:04.422 ************************************ 00:41:04.422 11:49:29 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:04.682 * Looking for test storage... 00:41:04.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:04.682 11:49:29 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:04.682 11:49:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:04.682 11:49:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:04.682 11:49:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:04.682 11:49:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.682 11:49:29 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.682 11:49:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.682 11:49:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:04.682 11:49:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:04.682 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:04.683 11:49:29 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:04.683 11:49:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:04.683 11:49:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:04.683 11:49:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:04.683 11:49:29 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.683 11:49:29 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.683 11:49:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.683 11:49:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:04.683 11:49:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:04.683 11:49:29 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:04.683 11:49:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:04.683 11:49:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:04.683 11:49:29 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:41:04.683 11:49:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:41:14.666 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:14.667 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:14.667 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:14.667 11:49:37 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:14.667 Found net devices under 0000:af:00.0: cvl_0_0 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:14.667 Found net devices under 0000:af:00.1: cvl_0_1 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:14.667 11:49:37 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:14.667 11:49:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:14.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:14.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:41:14.667 00:41:14.667 --- 10.0.0.2 ping statistics --- 00:41:14.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.667 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:14.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:14.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:41:14.667 00:41:14.667 --- 10.0.0.1 ping statistics --- 00:41:14.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.667 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:14.667 11:49:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:14.667 11:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:14.667 11:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:d8:00.0 00:41:14.667 11:49:38 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:d8:00.0 00:41:14.667 11:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:41:14.667 11:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:41:14.667 11:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:41:14.667 11:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:14.667 11:49:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:14.667 EAL: No free 2048 kB hugepages reported on node 1 00:41:18.857 
11:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN036005WL1P6AGN 00:41:18.857 11:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:41:18.857 11:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:18.857 11:49:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:18.857 EAL: No free 2048 kB hugepages reported on node 1 00:41:23.050 11:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:23.050 11:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:23.050 11:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:23.050 11:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4187400 00:41:23.050 11:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:23.050 11:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:23.050 11:49:48 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4187400 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 4187400 ']' 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:23.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:23.050 11:49:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:23.309 [2024-06-10 11:49:48.170907] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:41:23.309 [2024-06-10 11:49:48.170971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:23.309 EAL: No free 2048 kB hugepages reported on node 1 00:41:23.309 [2024-06-10 11:49:48.298597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:23.309 [2024-06-10 11:49:48.384375] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:23.309 [2024-06-10 11:49:48.384423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:41:23.309 [2024-06-10 11:49:48.384436] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:23.309 [2024-06-10 11:49:48.384448] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:23.309 [2024-06-10 11:49:48.384458] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:23.310 [2024-06-10 11:49:48.384555] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:41:23.310 [2024-06-10 11:49:48.384683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:41:23.310 [2024-06-10 11:49:48.385167] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:41:23.310 [2024-06-10 11:49:48.385169] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:24.247 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:24.247 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:41:24.247 11:49:49 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:24.247 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.247 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:24.247 INFO: Log level set to 20 00:41:24.247 INFO: Requests: 00:41:24.247 { 00:41:24.247 "jsonrpc": "2.0", 00:41:24.247 "method": "nvmf_set_config", 00:41:24.247 "id": 1, 00:41:24.247 "params": { 00:41:24.247 "admin_cmd_passthru": { 00:41:24.247 "identify_ctrlr": true 00:41:24.247 } 00:41:24.247 } 00:41:24.247 } 00:41:24.247 00:41:24.247 INFO: response: 00:41:24.247 { 00:41:24.247 "jsonrpc": "2.0", 00:41:24.247 "id": 1, 00:41:24.247 "result": true 00:41:24.247 } 00:41:24.247 00:41:24.247 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.247 11:49:49 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:24.247 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.247 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:24.247 INFO: Setting log level to 20 00:41:24.247 INFO: Setting log level to 20 00:41:24.247 INFO: Log level set to 20 00:41:24.247 INFO: Log level set to 20 00:41:24.247 INFO: Requests: 00:41:24.247 { 00:41:24.247 "jsonrpc": "2.0", 00:41:24.247 "method": "framework_start_init", 00:41:24.247 "id": 1 00:41:24.247 } 00:41:24.247 00:41:24.247 INFO: Requests: 00:41:24.247 { 00:41:24.247 "jsonrpc": "2.0", 00:41:24.247 "method": "framework_start_init", 00:41:24.247 "id": 1 00:41:24.247 } 00:41:24.247 00:41:24.247 [2024-06-10 11:49:49.161505] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:24.247 INFO: response: 00:41:24.247 { 00:41:24.247 "jsonrpc": "2.0", 00:41:24.247 "id": 1, 00:41:24.247 "result": true 00:41:24.247 } 00:41:24.247 00:41:24.247 INFO: response: 00:41:24.247 { 00:41:24.247 "jsonrpc": "2.0", 00:41:24.247 "id": 1, 00:41:24.247 "result": true 00:41:24.247 } 00:41:24.247 00:41:24.247 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.247 11:49:49 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:24.247 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.247 11:49:49 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:41:24.247 INFO: Setting log level to 40 00:41:24.248 INFO: Setting log level to 40 00:41:24.248 INFO: Setting log level to 40 00:41:24.248 [2024-06-10 11:49:49.175198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:24.248 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:24.248 11:49:49 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:24.248 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:24.248 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:24.248 11:49:49 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:41:24.248 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:24.248 11:49:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:27.537 Nvme0n1 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:27.537 [2024-06-10 11:49:52.123650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:27.537 [ 00:41:27.537 { 00:41:27.537 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:27.537 "subtype": "Discovery", 00:41:27.537 "listen_addresses": [], 00:41:27.537 "allow_any_host": true, 00:41:27.537 "hosts": [] 00:41:27.537 }, 00:41:27.537 { 00:41:27.537 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:27.537 "subtype": "NVMe", 00:41:27.537 "listen_addresses": [ 00:41:27.537 { 00:41:27.537 "trtype": "TCP", 00:41:27.537 "adrfam": "IPv4", 00:41:27.537 "traddr": "10.0.0.2", 00:41:27.537 "trsvcid": "4420" 00:41:27.537 } 00:41:27.537 ], 00:41:27.537 "allow_any_host": true, 00:41:27.537 "hosts": [], 00:41:27.537 "serial_number": 
"SPDK00000000000001", 00:41:27.537 "model_number": "SPDK bdev Controller", 00:41:27.537 "max_namespaces": 1, 00:41:27.537 "min_cntlid": 1, 00:41:27.537 "max_cntlid": 65519, 00:41:27.537 "namespaces": [ 00:41:27.537 { 00:41:27.537 "nsid": 1, 00:41:27.537 "bdev_name": "Nvme0n1", 00:41:27.537 "name": "Nvme0n1", 00:41:27.537 "nguid": "D9287BD6B38849D78695D9789AD890DA", 00:41:27.537 "uuid": "d9287bd6-b388-49d7-8695-d9789ad890da" 00:41:27.537 } 00:41:27.537 ] 00:41:27.537 } 00:41:27.537 ] 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:27.537 EAL: No free 2048 kB hugepages reported on node 1 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN036005WL1P6AGN 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:27.537 EAL: No free 2048 kB hugepages reported on node 1 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN036005WL1P6AGN '!=' PHLN036005WL1P6AGN ']' 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:27.537 11:49:52 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:27.537 rmmod nvme_tcp 00:41:27.537 rmmod nvme_fabrics 00:41:27.537 rmmod nvme_keyring 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:41:27.537 11:49:52 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 4187400 ']' 00:41:27.537 11:49:52 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 4187400 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 4187400 ']' 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 4187400 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:27.537 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 4187400 00:41:27.796 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:27.796 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:27.796 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 4187400' 00:41:27.796 killing process with pid 4187400 00:41:27.796 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 4187400 00:41:27.796 11:49:52 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 4187400 00:41:29.699 11:49:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:29.699 11:49:54 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:29.699 11:49:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:29.699 11:49:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:29.699 11:49:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:29.699 11:49:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:29.699 11:49:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:29.699 11:49:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:32.238 11:49:56 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:32.238 00:41:32.238 real 0m27.248s 00:41:32.238 user 0m33.946s 00:41:32.238 sys 0m8.319s 00:41:32.238 11:49:56 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:32.238 11:49:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:32.238 ************************************ 00:41:32.238 END TEST nvmf_identify_passthru 00:41:32.238 ************************************ 00:41:32.239 11:49:56 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:32.239 11:49:56 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:32.239 11:49:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:32.239 11:49:56 -- common/autotest_common.sh@10 -- # set +x 00:41:32.239 ************************************ 00:41:32.239 START TEST nvmf_dif 00:41:32.239 ************************************ 00:41:32.239 11:49:56 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:32.239 * Looking for test storage... 
00:41:32.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:32.239 11:49:56 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:32.239 11:49:56 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:32.239 11:49:56 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:32.239 11:49:56 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:32.239 11:49:56 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.239 11:49:56 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.239 11:49:56 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.239 11:49:56 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:41:32.239 11:49:56 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:32.239 11:49:56 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:32.239 11:49:56 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:32.239 11:49:56 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:32.239 11:49:56 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:32.239 11:49:56 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:32.239 11:49:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:32.239 11:49:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:32.239 11:49:56 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:41:32.239 11:49:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:40.364 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:40.364 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
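The PCI discovery traced above resolves each matched Intel E810 function to its kernel net device through sysfs before deciding which interfaces the test can use. A minimal standalone sketch of that lookup, with the sysfs path copied from the trace and the helper name purely illustrative (it is not part of nvmf/common.sh):

# Resolve a PCI function to the net device(s) the kernel exposes for it.
list_pci_net_devs() {
    local pci=$1 net_dev
    for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$net_dev" ] || continue              # skip functions with no bound netdev
        echo "Found net devices under $pci: ${net_dev##*/}"
    done
}
list_pci_net_devs 0000:af:00.0                     # -> Found net devices under 0000:af:00.0: cvl_0_0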
00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:40.364 Found net devices under 0000:af:00.0: cvl_0_0 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:40.364 Found net devices under 0000:af:00.1: cvl_0_1 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:40.364 11:50:04 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:40.364 11:50:04 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:40.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:40.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:41:40.364 00:41:40.364 --- 10.0.0.2 ping statistics --- 00:41:40.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.365 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:41:40.365 11:50:04 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:40.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:40.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:41:40.365 00:41:40.365 --- 10.0.0.1 ping statistics --- 00:41:40.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.365 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:41:40.365 11:50:05 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:40.365 11:50:05 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:41:40.365 11:50:05 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:41:40.365 11:50:05 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:44.562 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:41:44.562 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:44.562 11:50:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:44.562 11:50:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:44.562 11:50:09 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:44.562 11:50:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=537 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 537 00:41:44.562 11:50:09 nvmf_dif -- common/autotest_common.sh@830 -- # 
'[' -z 537 ']' 00:41:44.562 11:50:09 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.562 11:50:09 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:44.562 11:50:09 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:44.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:44.562 11:50:09 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:44.562 11:50:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:44.562 11:50:09 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:44.562 [2024-06-10 11:50:09.293887] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:41:44.562 [2024-06-10 11:50:09.293942] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:44.562 EAL: No free 2048 kB hugepages reported on node 1 00:41:44.562 [2024-06-10 11:50:09.418732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:44.562 [2024-06-10 11:50:09.507865] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:44.562 [2024-06-10 11:50:09.507907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:44.562 [2024-06-10 11:50:09.507920] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:44.562 [2024-06-10 11:50:09.507932] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:44.562 [2024-06-10 11:50:09.507943] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
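The target process traced above is a single-core nvmf_tgt (-i 0 -e 0xFFFF) started inside the cvl_0_0_ns_spdk namespace, and the harness blocks until its RPC socket /var/tmp/spdk.sock is listening. A rough equivalent of that launch-and-wait step, with paths taken from the log and the polling loop as an illustrative stand-in for the waitforlisten helper:

# Launch the NVMe-oF target in the target-side namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # RPC server is reachable once the socket exists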
00:41:44.562 [2024-06-10 11:50:09.507969] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.131 11:50:10 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:45.131 11:50:10 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:41:45.131 11:50:10 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:45.131 11:50:10 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:45.131 11:50:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:45.131 11:50:10 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:45.131 11:50:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:45.131 11:50:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:45.131 11:50:10 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.131 11:50:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:45.131 [2024-06-10 11:50:10.232061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:45.391 11:50:10 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.391 11:50:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:45.391 11:50:10 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:45.391 11:50:10 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:45.391 11:50:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:45.391 ************************************ 00:41:45.391 START TEST fio_dif_1_default 00:41:45.391 ************************************ 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:45.391 bdev_null0 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:45.391 [2024-06-10 11:50:10.300391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:45.391 { 00:41:45.391 "params": { 00:41:45.391 "name": "Nvme$subsystem", 00:41:45.391 "trtype": "$TEST_TRANSPORT", 00:41:45.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:45.391 "adrfam": "ipv4", 00:41:45.391 "trsvcid": "$NVMF_PORT", 00:41:45.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:45.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:45.391 "hdgst": ${hdgst:-false}, 00:41:45.391 "ddgst": ${ddgst:-false} 00:41:45.391 }, 00:41:45.391 "method": "bdev_nvme_attach_controller" 00:41:45.391 } 00:41:45.391 EOF 00:41:45.391 )") 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # grep libasan 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:41:45.391 11:50:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:45.391 "params": { 00:41:45.391 "name": "Nvme0", 00:41:45.391 "trtype": "tcp", 00:41:45.391 "traddr": "10.0.0.2", 00:41:45.391 "adrfam": "ipv4", 00:41:45.391 "trsvcid": "4420", 00:41:45.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:45.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:45.391 "hdgst": false, 00:41:45.392 "ddgst": false 00:41:45.392 }, 00:41:45.392 "method": "bdev_nvme_attach_controller" 00:41:45.392 }' 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:45.392 11:50:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:45.651 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:45.651 fio-3.35 00:41:45.651 Starting 1 thread 00:41:45.911 EAL: No free 2048 kB hugepages reported on node 1 00:41:58.217 00:41:58.217 filename0: (groupid=0, jobs=1): err= 0: pid=1119: Mon Jun 10 11:50:21 2024 00:41:58.217 read: IOPS=188, BW=754KiB/s (772kB/s)(7536KiB/10001msec) 00:41:58.217 slat (nsec): min=7987, max=52661, avg=8298.72, stdev=1731.96 00:41:58.217 clat (usec): min=611, max=44286, avg=21209.81, stdev=20418.49 00:41:58.217 lat (usec): min=619, max=44311, avg=21218.11, stdev=20418.41 00:41:58.217 clat percentiles (usec): 00:41:58.217 | 1.00th=[ 619], 5.00th=[ 619], 10.00th=[ 627], 20.00th=[ 644], 00:41:58.217 | 30.00th=[ 660], 40.00th=[ 930], 50.00th=[41157], 60.00th=[41157], 00:41:58.217 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:41:58.217 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:41:58.217 | 99.99th=[44303] 00:41:58.217 bw ( KiB/s): min= 672, max= 768, per=99.80%, avg=752.84, stdev=26.92, samples=19 00:41:58.217 iops : min= 168, max= 192, avg=188.21, stdev= 6.73, samples=19 00:41:58.217 
lat (usec) : 750=30.79%, 1000=16.19% 00:41:58.217 lat (msec) : 2=2.92%, 50=50.11% 00:41:58.217 cpu : usr=85.26%, sys=14.45%, ctx=13, majf=0, minf=207 00:41:58.217 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:58.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.217 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.217 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:58.217 00:41:58.217 Run status group 0 (all jobs): 00:41:58.217 READ: bw=754KiB/s (772kB/s), 754KiB/s-754KiB/s (772kB/s-772kB/s), io=7536KiB (7717kB), run=10001-10001msec 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:58.217 11:50:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:58.217 00:41:58.217 real 0m11.321s 00:41:58.218 user 0m20.212s 00:41:58.218 sys 0m1.815s 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 ************************************ 00:41:58.218 END TEST fio_dif_1_default 00:41:58.218 ************************************ 00:41:58.218 11:50:21 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:58.218 11:50:21 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:58.218 11:50:21 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 ************************************ 00:41:58.218 START TEST fio_dif_1_multi_subsystems 00:41:58.218 ************************************ 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@31 -- # create_subsystem 0 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 bdev_null0 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 [2024-06-10 11:50:21.712910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 bdev_null1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
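Each subsystem in this test is built with the same four RPC steps traced above: create a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. The same setup as standalone rpc.py calls (rpc_cmd in the harness drives the same RPC interface; the default /var/tmp/spdk.sock socket is assumed here):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420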
00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:58.218 { 00:41:58.218 "params": { 00:41:58.218 "name": "Nvme$subsystem", 00:41:58.218 "trtype": "$TEST_TRANSPORT", 00:41:58.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:58.218 "adrfam": "ipv4", 00:41:58.218 "trsvcid": "$NVMF_PORT", 00:41:58.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:58.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:58.218 "hdgst": ${hdgst:-false}, 00:41:58.218 "ddgst": ${ddgst:-false} 00:41:58.218 }, 00:41:58.218 "method": "bdev_nvme_attach_controller" 00:41:58.218 } 00:41:58.218 EOF 00:41:58.218 )") 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # shift 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:58.218 { 00:41:58.218 "params": { 00:41:58.218 "name": "Nvme$subsystem", 00:41:58.218 "trtype": "$TEST_TRANSPORT", 00:41:58.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:58.218 "adrfam": "ipv4", 00:41:58.218 "trsvcid": "$NVMF_PORT", 00:41:58.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:58.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:58.218 "hdgst": ${hdgst:-false}, 00:41:58.218 "ddgst": ${ddgst:-false} 00:41:58.218 }, 00:41:58.218 "method": "bdev_nvme_attach_controller" 00:41:58.218 } 00:41:58.218 EOF 00:41:58.218 )") 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
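The JSON printed just below hands the fio bdev plugin one bdev_nvme_attach_controller entry per target subsystem, so fio sees the remote namespaces as local bdevs. Outside the harness, the equivalent invocation is roughly the following, with the plugin and fio paths as they appear later in the trace and bdev.json standing in for the config the test pipes through /dev/fd/62:

LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json jobs.fio    # jobs.fio: the generated [filenameN] sections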
00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:58.218 "params": { 00:41:58.218 "name": "Nvme0", 00:41:58.218 "trtype": "tcp", 00:41:58.218 "traddr": "10.0.0.2", 00:41:58.218 "adrfam": "ipv4", 00:41:58.218 "trsvcid": "4420", 00:41:58.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:58.218 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:58.218 "hdgst": false, 00:41:58.218 "ddgst": false 00:41:58.218 }, 00:41:58.218 "method": "bdev_nvme_attach_controller" 00:41:58.218 },{ 00:41:58.218 "params": { 00:41:58.218 "name": "Nvme1", 00:41:58.218 "trtype": "tcp", 00:41:58.218 "traddr": "10.0.0.2", 00:41:58.218 "adrfam": "ipv4", 00:41:58.218 "trsvcid": "4420", 00:41:58.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:58.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:58.218 "hdgst": false, 00:41:58.218 "ddgst": false 00:41:58.218 }, 00:41:58.218 "method": "bdev_nvme_attach_controller" 00:41:58.218 }' 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:58.218 11:50:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:58.218 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:58.218 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:58.218 fio-3.35 00:41:58.218 Starting 2 threads 00:41:58.218 EAL: No free 2048 kB hugepages reported on node 1 00:42:08.194 00:42:08.194 filename0: (groupid=0, jobs=1): err= 0: pid=3283: Mon Jun 10 11:50:33 2024 00:42:08.194 read: IOPS=95, BW=381KiB/s (391kB/s)(3824KiB/10025msec) 00:42:08.194 slat (nsec): min=8107, max=40090, avg=9888.25, stdev=2879.01 00:42:08.194 clat (usec): min=40863, max=43077, avg=41914.22, stdev=372.76 00:42:08.194 lat (usec): min=40871, max=43091, avg=41924.11, stdev=373.05 00:42:08.194 clat percentiles (usec): 00:42:08.194 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:42:08.194 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:42:08.194 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:08.194 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:42:08.194 | 99.99th=[43254] 
00:42:08.194 bw ( KiB/s): min= 352, max= 384, per=49.81%, avg=380.80, stdev= 9.85, samples=20 00:42:08.194 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:42:08.194 lat (msec) : 50=100.00% 00:42:08.194 cpu : usr=93.15%, sys=6.58%, ctx=17, majf=0, minf=99 00:42:08.194 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:08.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.194 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:08.194 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:08.194 filename1: (groupid=0, jobs=1): err= 0: pid=3284: Mon Jun 10 11:50:33 2024 00:42:08.194 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10014msec) 00:42:08.194 slat (nsec): min=5665, max=30653, avg=7607.56, stdev=2972.54 00:42:08.194 clat (usec): min=40856, max=43031, avg=41874.01, stdev=375.50 00:42:08.194 lat (usec): min=40862, max=43043, avg=41881.62, stdev=375.56 00:42:08.194 clat percentiles (usec): 00:42:08.194 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:42:08.194 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:42:08.194 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:08.194 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:42:08.194 | 99.99th=[43254] 00:42:08.194 bw ( KiB/s): min= 352, max= 384, per=49.81%, avg=380.80, stdev= 9.85, samples=20 00:42:08.194 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:42:08.194 lat (msec) : 50=100.00% 00:42:08.194 cpu : usr=93.64%, sys=6.09%, ctx=15, majf=0, minf=143 00:42:08.194 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:08.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.194 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:08.194 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:08.194 00:42:08.194 Run status group 0 (all jobs): 00:42:08.194 READ: bw=763KiB/s (781kB/s), 381KiB/s-382KiB/s (391kB/s-391kB/s), io=7648KiB (7832kB), run=10014-10025msec 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.194 00:42:08.194 real 0m11.550s 00:42:08.194 user 0m31.208s 00:42:08.194 sys 0m1.671s 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:08.194 11:50:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:08.194 ************************************ 00:42:08.194 END TEST fio_dif_1_multi_subsystems 00:42:08.194 ************************************ 00:42:08.195 11:50:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:08.195 11:50:33 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:08.195 11:50:33 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:08.195 11:50:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:08.454 ************************************ 00:42:08.454 START TEST fio_dif_rand_params 00:42:08.454 ************************************ 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:08.454 11:50:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:08.454 bdev_null0 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:08.454 [2024-06-10 11:50:33.337544] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:08.454 { 00:42:08.454 "params": { 00:42:08.454 "name": "Nvme$subsystem", 00:42:08.454 "trtype": "$TEST_TRANSPORT", 00:42:08.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:08.454 "adrfam": "ipv4", 00:42:08.454 "trsvcid": "$NVMF_PORT", 00:42:08.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:08.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:08.454 "hdgst": ${hdgst:-false}, 00:42:08.454 "ddgst": ${ddgst:-false} 00:42:08.454 }, 00:42:08.454 "method": "bdev_nvme_attach_controller" 00:42:08.454 } 00:42:08.454 EOF 00:42:08.454 )") 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
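For this first rand_params case the harness requested bs=128k, iodepth=3, numjobs=3 and a 5-second run against a DIF type 3 null bdev (the @103 assignments traced earlier). A hand-written job file of the same shape would look roughly like the one below; the exact text produced by gen_fio_conf may differ, and filename=Nvme0n1 assumes the bdev name created by the attached controller:

[global]
thread=1                  # required by the spdk_bdev ioengine
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1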
00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:08.454 "params": { 00:42:08.454 "name": "Nvme0", 00:42:08.454 "trtype": "tcp", 00:42:08.454 "traddr": "10.0.0.2", 00:42:08.454 "adrfam": "ipv4", 00:42:08.454 "trsvcid": "4420", 00:42:08.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:08.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:08.454 "hdgst": false, 00:42:08.454 "ddgst": false 00:42:08.454 }, 00:42:08.454 "method": "bdev_nvme_attach_controller" 00:42:08.454 }' 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:08.454 11:50:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:08.713 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:08.713 ... 
00:42:08.713 fio-3.35 00:42:08.713 Starting 3 threads 00:42:08.713 EAL: No free 2048 kB hugepages reported on node 1 00:42:15.286 00:42:15.286 filename0: (groupid=0, jobs=1): err= 0: pid=5342: Mon Jun 10 11:50:39 2024 00:42:15.286 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(126MiB/5047msec) 00:42:15.286 slat (nsec): min=8180, max=34108, avg=11808.39, stdev=2768.28 00:42:15.286 clat (usec): min=5464, max=94344, avg=14962.69, stdev=14789.52 00:42:15.286 lat (usec): min=5481, max=94359, avg=14974.50, stdev=14789.77 00:42:15.286 clat percentiles (usec): 00:42:15.286 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7701], 00:42:15.286 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10683], 00:42:15.286 | 70.00th=[12125], 80.00th=[13173], 90.00th=[50070], 95.00th=[52691], 00:42:15.286 | 99.00th=[55313], 99.50th=[57410], 99.90th=[91751], 99.95th=[93848], 00:42:15.286 | 99.99th=[93848] 00:42:15.286 bw ( KiB/s): min=19968, max=34560, per=33.80%, avg=25728.00, stdev=4076.84, samples=10 00:42:15.286 iops : min= 156, max= 270, avg=201.00, stdev=31.85, samples=10 00:42:15.286 lat (msec) : 10=55.16%, 20=32.64%, 50=2.28%, 100=9.92% 00:42:15.286 cpu : usr=91.52%, sys=8.11%, ctx=10, majf=0, minf=85 00:42:15.286 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:15.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.286 issued rwts: total=1008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:15.286 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:15.286 filename0: (groupid=0, jobs=1): err= 0: pid=5343: Mon Jun 10 11:50:39 2024 00:42:15.286 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(125MiB/5046msec) 00:42:15.286 slat (nsec): min=8212, max=25614, avg=11888.99, stdev=2700.66 00:42:15.286 clat (usec): min=5292, max=90996, avg=15108.05, stdev=14139.13 00:42:15.286 lat (usec): min=5303, max=91007, avg=15119.94, stdev=14139.27 00:42:15.286 clat percentiles (usec): 00:42:15.286 | 1.00th=[ 5538], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 8455], 00:42:15.286 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10814], 00:42:15.286 | 70.00th=[12387], 80.00th=[13566], 90.00th=[50070], 95.00th=[52691], 00:42:15.286 | 99.00th=[55837], 99.50th=[55837], 99.90th=[90702], 99.95th=[90702], 00:42:15.286 | 99.99th=[90702] 00:42:15.286 bw ( KiB/s): min=18432, max=33280, per=33.47%, avg=25472.00, stdev=5397.29, samples=10 00:42:15.286 iops : min= 144, max= 260, avg=199.00, stdev=42.17, samples=10 00:42:15.286 lat (msec) : 10=51.60%, 20=36.27%, 50=1.70%, 100=10.42% 00:42:15.286 cpu : usr=91.26%, sys=8.36%, ctx=9, majf=0, minf=64 00:42:15.286 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:15.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.286 issued rwts: total=998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:15.286 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:15.286 filename0: (groupid=0, jobs=1): err= 0: pid=5344: Mon Jun 10 11:50:39 2024 00:42:15.286 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(124MiB/5046msec) 00:42:15.286 slat (nsec): min=8207, max=34553, avg=11601.34, stdev=3052.43 00:42:15.286 clat (usec): min=5098, max=55829, avg=15194.82, stdev=14036.74 00:42:15.286 lat (usec): min=5108, max=55838, avg=15206.42, stdev=14036.95 00:42:15.286 clat percentiles (usec): 00:42:15.286 
| 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 8160], 00:42:15.286 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11207], 00:42:15.286 | 70.00th=[12256], 80.00th=[13304], 90.00th=[50070], 95.00th=[51643], 00:42:15.286 | 99.00th=[54264], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:42:15.286 | 99.99th=[55837] 00:42:15.286 bw ( KiB/s): min=22272, max=30720, per=33.37%, avg=25395.20, stdev=3146.48, samples=10 00:42:15.286 iops : min= 174, max= 240, avg=198.40, stdev=24.58, samples=10 00:42:15.286 lat (msec) : 10=48.24%, 20=39.10%, 50=2.51%, 100=10.15% 00:42:15.286 cpu : usr=91.00%, sys=8.58%, ctx=9, majf=0, minf=114 00:42:15.286 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:15.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:15.286 issued rwts: total=995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:15.286 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:15.286 00:42:15.286 Run status group 0 (all jobs): 00:42:15.286 READ: bw=74.3MiB/s (77.9MB/s), 24.6MiB/s-25.0MiB/s (25.8MB/s-26.2MB/s), io=375MiB (393MB), run=5046-5047msec 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:15.286 11:50:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.286 bdev_null0 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.286 [2024-06-10 11:50:39.698415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.286 bdev_null1 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
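Each create_subsystem iteration traced above reduces to four RPCs: create a null bdev with DIF type 2 protection metadata (64 MB, 512-byte blocks, 16-byte metadata), create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. Issued by hand against a running target (rpc_cmd in the trace is the framework's RPC wrapper; the path to scripts/rpc.py and the pre-existing TCP transport are assumptions), the per-index sequence looks roughly like:

sub=0
rpc=./spdk/scripts/rpc.py   # assumed location of the RPC client
$rpc bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
    --serial-number 53313233-$sub --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
    -t tcp -a 10.0.0.2 -s 4420

The destroy_subsystems path seen before and after each fio run is the mirror image: nvmf_delete_subsystem on the NQN followed by bdev_null_delete on the backing bdev.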
00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:15.286 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.287 bdev_null2 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:42:15.287 { 00:42:15.287 "params": { 00:42:15.287 "name": "Nvme$subsystem", 00:42:15.287 "trtype": "$TEST_TRANSPORT", 00:42:15.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:15.287 "adrfam": "ipv4", 00:42:15.287 "trsvcid": "$NVMF_PORT", 00:42:15.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:15.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:15.287 "hdgst": ${hdgst:-false}, 00:42:15.287 "ddgst": ${ddgst:-false} 00:42:15.287 }, 00:42:15.287 "method": "bdev_nvme_attach_controller" 00:42:15.287 } 00:42:15.287 EOF 00:42:15.287 )") 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:15.287 { 00:42:15.287 "params": { 00:42:15.287 "name": "Nvme$subsystem", 00:42:15.287 "trtype": "$TEST_TRANSPORT", 00:42:15.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:15.287 "adrfam": "ipv4", 00:42:15.287 "trsvcid": "$NVMF_PORT", 00:42:15.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:15.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:15.287 "hdgst": ${hdgst:-false}, 00:42:15.287 "ddgst": ${ddgst:-false} 00:42:15.287 }, 00:42:15.287 "method": "bdev_nvme_attach_controller" 00:42:15.287 } 00:42:15.287 EOF 00:42:15.287 )") 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:15.287 { 00:42:15.287 "params": { 00:42:15.287 "name": "Nvme$subsystem", 00:42:15.287 "trtype": "$TEST_TRANSPORT", 00:42:15.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:15.287 "adrfam": "ipv4", 00:42:15.287 "trsvcid": "$NVMF_PORT", 00:42:15.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:15.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:15.287 "hdgst": ${hdgst:-false}, 00:42:15.287 "ddgst": ${ddgst:-false} 00:42:15.287 }, 00:42:15.287 "method": "bdev_nvme_attach_controller" 00:42:15.287 } 00:42:15.287 EOF 00:42:15.287 )") 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:15.287 "params": { 00:42:15.287 "name": "Nvme0", 00:42:15.287 "trtype": "tcp", 00:42:15.287 "traddr": "10.0.0.2", 00:42:15.287 "adrfam": "ipv4", 00:42:15.287 "trsvcid": "4420", 00:42:15.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:15.287 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:15.287 "hdgst": false, 00:42:15.287 "ddgst": false 00:42:15.287 }, 00:42:15.287 "method": "bdev_nvme_attach_controller" 00:42:15.287 },{ 00:42:15.287 "params": { 00:42:15.287 "name": "Nvme1", 00:42:15.287 "trtype": "tcp", 00:42:15.287 "traddr": "10.0.0.2", 00:42:15.287 "adrfam": "ipv4", 00:42:15.287 "trsvcid": "4420", 00:42:15.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:15.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:15.287 "hdgst": false, 00:42:15.287 "ddgst": false 00:42:15.287 }, 00:42:15.287 "method": "bdev_nvme_attach_controller" 00:42:15.287 },{ 00:42:15.287 "params": { 00:42:15.287 "name": "Nvme2", 00:42:15.287 "trtype": "tcp", 00:42:15.287 "traddr": "10.0.0.2", 00:42:15.287 "adrfam": "ipv4", 00:42:15.287 "trsvcid": "4420", 00:42:15.287 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:15.287 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:15.287 "hdgst": false, 00:42:15.287 "ddgst": false 00:42:15.287 }, 00:42:15.287 "method": "bdev_nvme_attach_controller" 00:42:15.287 }' 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # asan_lib= 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:15.287 11:50:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:15.287 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:15.287 ... 00:42:15.287 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:15.287 ... 00:42:15.287 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:15.287 ... 00:42:15.287 fio-3.35 00:42:15.287 Starting 24 threads 00:42:15.287 EAL: No free 2048 kB hugepages reported on node 1 00:42:27.489 00:42:27.489 filename0: (groupid=0, jobs=1): err= 0: pid=6563: Mon Jun 10 11:50:51 2024 00:42:27.489 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10005msec) 00:42:27.489 slat (nsec): min=8251, max=44663, avg=16123.72, stdev=4815.31 00:42:27.489 clat (usec): min=3509, max=62108, avg=32401.70, stdev=5237.45 00:42:27.489 lat (usec): min=3529, max=62122, avg=32417.82, stdev=5238.43 00:42:27.489 clat percentiles (usec): 00:42:27.489 | 1.00th=[ 6587], 5.00th=[21103], 10.00th=[32113], 20.00th=[32375], 00:42:27.489 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:27.489 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.489 | 99.00th=[49021], 99.50th=[49546], 99.90th=[62129], 99.95th=[62129], 00:42:27.489 | 99.99th=[62129] 00:42:27.489 bw ( KiB/s): min= 1904, max= 2400, per=4.31%, avg=1970.53, stdev=133.89, samples=19 00:42:27.489 iops : min= 476, max= 600, avg=492.63, stdev=33.47, samples=19 00:42:27.489 lat (msec) : 4=0.24%, 10=0.98%, 20=3.39%, 50=95.02%, 100=0.37% 00:42:27.489 cpu : usr=96.44%, sys=3.17%, ctx=5, majf=0, minf=35 00:42:27.489 IO depths : 1=5.1%, 2=10.7%, 4=22.8%, 8=53.8%, 16=7.5%, 32=0.0%, >=64=0.0% 00:42:27.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.489 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.489 filename0: (groupid=0, jobs=1): err= 0: pid=6564: Mon Jun 10 11:50:51 2024 00:42:27.489 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10009msec) 00:42:27.489 slat (nsec): min=8186, max=73400, avg=17938.80, stdev=9193.31 00:42:27.489 clat (usec): min=13187, max=56554, avg=33371.23, stdev=1931.10 00:42:27.489 lat (usec): min=13201, max=56570, avg=33389.17, stdev=1930.43 00:42:27.489 clat percentiles (usec): 00:42:27.489 | 1.00th=[31065], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:42:27.489 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:27.489 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.489 | 99.00th=[37487], 99.50th=[50594], 99.90th=[56361], 99.95th=[56361], 00:42:27.489 | 99.99th=[56361] 00:42:27.489 bw ( KiB/s): min= 1792, max= 1968, per=4.17%, avg=1906.53, stdev=44.06, samples=19 00:42:27.489 iops : min= 448, max= 492, avg=476.63, stdev=11.02, samples=19 00:42:27.489 lat (msec) : 20=0.13%, 
50=99.25%, 100=0.63% 00:42:27.489 cpu : usr=97.01%, sys=2.59%, ctx=13, majf=0, minf=31 00:42:27.489 IO depths : 1=1.8%, 2=4.0%, 4=8.7%, 8=70.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:42:27.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 complete : 0=0.0%, 4=91.0%, 8=7.1%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.489 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.489 filename0: (groupid=0, jobs=1): err= 0: pid=6565: Mon Jun 10 11:50:51 2024 00:42:27.489 read: IOPS=484, BW=1939KiB/s (1986kB/s)(18.9MiB/10001msec) 00:42:27.489 slat (nsec): min=8889, max=56464, avg=17733.93, stdev=7248.11 00:42:27.489 clat (usec): min=4071, max=44225, avg=32862.97, stdev=3208.86 00:42:27.489 lat (usec): min=4087, max=44265, avg=32880.70, stdev=3209.01 00:42:27.489 clat percentiles (usec): 00:42:27.489 | 1.00th=[ 9896], 5.00th=[32375], 10.00th=[32375], 20.00th=[32900], 00:42:27.489 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:27.489 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.489 | 99.00th=[34341], 99.50th=[34866], 99.90th=[44303], 99.95th=[44303], 00:42:27.489 | 99.99th=[44303] 00:42:27.489 bw ( KiB/s): min= 1792, max= 2304, per=4.24%, avg=1940.21, stdev=97.88, samples=19 00:42:27.489 iops : min= 448, max= 576, avg=485.05, stdev=24.47, samples=19 00:42:27.489 lat (msec) : 10=1.13%, 20=0.19%, 50=98.68% 00:42:27.489 cpu : usr=96.77%, sys=2.84%, ctx=9, majf=0, minf=29 00:42:27.489 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:27.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.489 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.489 filename0: (groupid=0, jobs=1): err= 0: pid=6566: Mon Jun 10 11:50:51 2024 00:42:27.489 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10010msec) 00:42:27.489 slat (nsec): min=6174, max=78209, avg=29207.14, stdev=10495.01 00:42:27.489 clat (usec): min=16650, max=80317, avg=33214.86, stdev=3015.47 00:42:27.489 lat (usec): min=16666, max=80335, avg=33244.07, stdev=3014.48 00:42:27.489 clat percentiles (usec): 00:42:27.489 | 1.00th=[28967], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:27.489 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:42:27.489 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.489 | 99.00th=[38011], 99.50th=[41157], 99.90th=[80217], 99.95th=[80217], 00:42:27.489 | 99.99th=[80217] 00:42:27.489 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1906.53, stdev=72.59, samples=19 00:42:27.489 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:42:27.489 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:42:27.489 cpu : usr=96.63%, sys=2.97%, ctx=12, majf=0, minf=34 00:42:27.489 IO depths : 1=5.8%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:42:27.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.489 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.489 filename0: (groupid=0, jobs=1): err= 0: pid=6567: Mon Jun 10 11:50:51 2024 
00:42:27.489 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10001msec) 00:42:27.489 slat (nsec): min=7272, max=76675, avg=28176.96, stdev=10374.29 00:42:27.489 clat (usec): min=14569, max=71094, avg=33172.74, stdev=2541.65 00:42:27.489 lat (usec): min=14586, max=71107, avg=33200.92, stdev=2541.00 00:42:27.489 clat percentiles (usec): 00:42:27.489 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:27.489 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:42:27.489 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:42:27.489 | 99.00th=[34341], 99.50th=[41681], 99.90th=[70779], 99.95th=[70779], 00:42:27.489 | 99.99th=[70779] 00:42:27.489 bw ( KiB/s): min= 1667, max= 2048, per=4.17%, avg=1906.68, stdev=72.04, samples=19 00:42:27.489 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:42:27.489 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:42:27.489 cpu : usr=96.74%, sys=2.87%, ctx=14, majf=0, minf=36 00:42:27.489 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:27.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.489 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.489 filename0: (groupid=0, jobs=1): err= 0: pid=6568: Mon Jun 10 11:50:51 2024 00:42:27.489 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:42:27.489 slat (nsec): min=12886, max=75604, avg=27339.70, stdev=9703.39 00:42:27.489 clat (usec): min=20215, max=50727, avg=33144.03, stdev=1358.35 00:42:27.489 lat (usec): min=20245, max=50752, avg=33171.37, stdev=1357.52 00:42:27.489 clat percentiles (usec): 00:42:27.489 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:42:27.489 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.489 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:42:27.489 | 99.00th=[34341], 99.50th=[34341], 99.90th=[50594], 99.95th=[50594], 00:42:27.489 | 99.99th=[50594] 00:42:27.489 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1913.26, stdev=29.37, samples=19 00:42:27.489 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:42:27.489 lat (msec) : 50=99.67%, 100=0.33% 00:42:27.489 cpu : usr=96.45%, sys=3.15%, ctx=10, majf=0, minf=27 00:42:27.489 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:27.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.489 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.489 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.490 filename0: (groupid=0, jobs=1): err= 0: pid=6569: Mon Jun 10 11:50:51 2024 00:42:27.490 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10011msec) 00:42:27.490 slat (nsec): min=8336, max=42348, avg=16507.64, stdev=4988.92 00:42:27.490 clat (usec): min=8712, max=57180, avg=33232.90, stdev=3806.83 00:42:27.490 lat (usec): min=8723, max=57189, avg=33249.41, stdev=3807.27 00:42:27.490 clat percentiles (usec): 00:42:27.490 | 1.00th=[17695], 5.00th=[31589], 10.00th=[32375], 20.00th=[32637], 00:42:27.490 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:27.490 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:42:27.490 
| 99.00th=[49021], 99.50th=[51643], 99.90th=[55313], 99.95th=[56361], 00:42:27.490 | 99.99th=[57410] 00:42:27.490 bw ( KiB/s): min= 1776, max= 2032, per=4.18%, avg=1913.26, stdev=54.99, samples=19 00:42:27.490 iops : min= 444, max= 508, avg=478.32, stdev=13.75, samples=19 00:42:27.490 lat (msec) : 10=0.08%, 20=2.35%, 50=96.85%, 100=0.71% 00:42:27.490 cpu : usr=96.73%, sys=2.87%, ctx=11, majf=0, minf=64 00:42:27.490 IO depths : 1=4.0%, 2=9.5%, 4=22.9%, 8=54.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:42:27.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.490 filename0: (groupid=0, jobs=1): err= 0: pid=6570: Mon Jun 10 11:50:51 2024 00:42:27.490 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10003msec) 00:42:27.490 slat (nsec): min=6694, max=74909, avg=28181.04, stdev=10120.13 00:42:27.490 clat (usec): min=11490, max=72099, avg=33180.25, stdev=2701.74 00:42:27.490 lat (usec): min=11500, max=72117, avg=33208.43, stdev=2700.96 00:42:27.490 clat percentiles (usec): 00:42:27.490 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:27.490 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:42:27.490 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:42:27.490 | 99.00th=[34341], 99.50th=[42206], 99.90th=[71828], 99.95th=[71828], 00:42:27.490 | 99.99th=[71828] 00:42:27.490 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1906.53, stdev=72.59, samples=19 00:42:27.490 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:42:27.490 lat (msec) : 20=0.42%, 50=99.16%, 100=0.42% 00:42:27.490 cpu : usr=96.45%, sys=3.15%, ctx=12, majf=0, minf=50 00:42:27.490 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:27.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.490 filename1: (groupid=0, jobs=1): err= 0: pid=6571: Mon Jun 10 11:50:51 2024 00:42:27.490 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:42:27.490 slat (nsec): min=8591, max=74286, avg=29106.97, stdev=9935.40 00:42:27.490 clat (usec): min=20286, max=61512, avg=33121.09, stdev=1455.18 00:42:27.490 lat (usec): min=20311, max=61527, avg=33150.20, stdev=1454.46 00:42:27.490 clat percentiles (usec): 00:42:27.490 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:27.490 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:42:27.490 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:42:27.490 | 99.00th=[34341], 99.50th=[34866], 99.90th=[50594], 99.95th=[50594], 00:42:27.490 | 99.99th=[61604] 00:42:27.490 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1913.26, stdev=29.37, samples=19 00:42:27.490 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:42:27.490 lat (msec) : 50=99.67%, 100=0.33% 00:42:27.490 cpu : usr=96.77%, sys=2.82%, ctx=23, majf=0, minf=40 00:42:27.490 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:27.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.490 filename1: (groupid=0, jobs=1): err= 0: pid=6572: Mon Jun 10 11:50:51 2024 00:42:27.490 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10008msec) 00:42:27.490 slat (nsec): min=6165, max=69541, avg=25943.84, stdev=9714.55 00:42:27.490 clat (usec): min=15350, max=79686, avg=33243.14, stdev=3068.39 00:42:27.490 lat (usec): min=15366, max=79703, avg=33269.09, stdev=3067.64 00:42:27.490 clat percentiles (usec): 00:42:27.490 | 1.00th=[28705], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:27.490 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.490 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.490 | 99.00th=[39060], 99.50th=[42206], 99.90th=[79168], 99.95th=[79168], 00:42:27.490 | 99.99th=[79168] 00:42:27.490 bw ( KiB/s): min= 1667, max= 2048, per=4.17%, avg=1906.68, stdev=72.23, samples=19 00:42:27.490 iops : min= 416, max= 512, avg=476.63, stdev=18.20, samples=19 00:42:27.490 lat (msec) : 20=0.33%, 50=99.29%, 100=0.38% 00:42:27.490 cpu : usr=96.82%, sys=2.79%, ctx=11, majf=0, minf=44 00:42:27.490 IO depths : 1=5.6%, 2=11.7%, 4=24.8%, 8=51.0%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:27.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.490 filename1: (groupid=0, jobs=1): err= 0: pid=6573: Mon Jun 10 11:50:51 2024 00:42:27.490 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:42:27.490 slat (nsec): min=8244, max=74769, avg=23332.78, stdev=9026.94 00:42:27.490 clat (usec): min=16043, max=61884, avg=33177.87, stdev=2193.58 00:42:27.490 lat (usec): min=16053, max=61908, avg=33201.20, stdev=2193.50 00:42:27.490 clat percentiles (usec): 00:42:27.490 | 1.00th=[25822], 5.00th=[31851], 10.00th=[32375], 20.00th=[32900], 00:42:27.490 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.490 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:42:27.490 | 99.00th=[40633], 99.50th=[41681], 99.90th=[50594], 99.95th=[50594], 00:42:27.490 | 99.99th=[62129] 00:42:27.490 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1919.32, stdev=58.18, samples=19 00:42:27.490 iops : min= 448, max= 512, avg=479.79, stdev=14.63, samples=19 00:42:27.490 lat (msec) : 20=0.08%, 50=99.58%, 100=0.33% 00:42:27.490 cpu : usr=96.79%, sys=2.80%, ctx=12, majf=0, minf=42 00:42:27.490 IO depths : 1=4.3%, 2=9.7%, 4=23.1%, 8=54.6%, 16=8.3%, 32=0.0%, >=64=0.0% 00:42:27.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.490 filename1: (groupid=0, jobs=1): err= 0: pid=6574: Mon Jun 10 11:50:51 2024 00:42:27.490 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10002msec) 00:42:27.490 slat (nsec): min=8214, max=74759, avg=21298.80, stdev=10847.79 00:42:27.490 clat (msec): min=2, max=109, avg=33.82, stdev= 5.37 00:42:27.490 lat (msec): min=2, max=110, avg=33.84, stdev= 5.37 
00:42:27.490 clat percentiles (msec): 00:42:27.490 | 1.00th=[ 19], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 33], 00:42:27.490 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:42:27.490 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 41], 00:42:27.490 | 99.00th=[ 52], 99.50th=[ 54], 99.90th=[ 91], 99.95th=[ 110], 00:42:27.490 | 99.99th=[ 110] 00:42:27.490 bw ( KiB/s): min= 1667, max= 1968, per=4.10%, avg=1875.53, stdev=67.28, samples=19 00:42:27.490 iops : min= 416, max= 492, avg=468.84, stdev=16.95, samples=19 00:42:27.490 lat (msec) : 4=0.13%, 20=0.98%, 50=97.50%, 100=1.34%, 250=0.06% 00:42:27.490 cpu : usr=96.39%, sys=3.21%, ctx=19, majf=0, minf=57 00:42:27.490 IO depths : 1=0.4%, 2=1.8%, 4=10.4%, 8=72.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:42:27.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 complete : 0=0.0%, 4=91.5%, 8=5.5%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 issued rwts: total=4716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.490 filename1: (groupid=0, jobs=1): err= 0: pid=6575: Mon Jun 10 11:50:51 2024 00:42:27.490 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10002msec) 00:42:27.490 slat (nsec): min=4502, max=69508, avg=25892.49, stdev=10414.71 00:42:27.490 clat (usec): min=14520, max=71238, avg=33326.64, stdev=3284.83 00:42:27.490 lat (usec): min=14547, max=71251, avg=33352.53, stdev=3284.23 00:42:27.490 clat percentiles (usec): 00:42:27.490 | 1.00th=[23725], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:42:27.490 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.490 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:42:27.490 | 99.00th=[47973], 99.50th=[54264], 99.90th=[70779], 99.95th=[70779], 00:42:27.490 | 99.99th=[70779] 00:42:27.490 bw ( KiB/s): min= 1667, max= 1936, per=4.15%, avg=1899.95, stdev=62.22, samples=19 00:42:27.490 iops : min= 416, max= 484, avg=474.95, stdev=15.71, samples=19 00:42:27.490 lat (msec) : 20=0.34%, 50=98.78%, 100=0.88% 00:42:27.490 cpu : usr=96.44%, sys=3.16%, ctx=11, majf=0, minf=41 00:42:27.490 IO depths : 1=5.0%, 2=10.9%, 4=24.3%, 8=52.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:42:27.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.490 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.490 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.490 filename1: (groupid=0, jobs=1): err= 0: pid=6576: Mon Jun 10 11:50:51 2024 00:42:27.490 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:42:27.490 slat (nsec): min=8602, max=77429, avg=29346.21, stdev=9556.06 00:42:27.491 clat (usec): min=20128, max=50990, avg=33108.48, stdev=1378.79 00:42:27.491 lat (usec): min=20145, max=51005, avg=33137.82, stdev=1377.95 00:42:27.491 clat percentiles (usec): 00:42:27.491 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:27.491 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:42:27.491 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:42:27.491 | 99.00th=[34341], 99.50th=[34866], 99.90th=[51119], 99.95th=[51119], 00:42:27.491 | 99.99th=[51119] 00:42:27.491 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1913.26, stdev=29.37, samples=19 00:42:27.491 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:42:27.491 lat (msec) : 
50=99.67%, 100=0.33% 00:42:27.491 cpu : usr=96.67%, sys=2.93%, ctx=10, majf=0, minf=33 00:42:27.491 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:27.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.491 filename1: (groupid=0, jobs=1): err= 0: pid=6577: Mon Jun 10 11:50:51 2024 00:42:27.491 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:42:27.491 slat (nsec): min=12551, max=75006, avg=29347.64, stdev=9818.14 00:42:27.491 clat (usec): min=20172, max=50971, avg=33093.87, stdev=1376.46 00:42:27.491 lat (usec): min=20189, max=50990, avg=33123.22, stdev=1376.03 00:42:27.491 clat percentiles (usec): 00:42:27.491 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:27.491 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:42:27.491 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33424], 95.00th=[33817], 00:42:27.491 | 99.00th=[34341], 99.50th=[34341], 99.90th=[51119], 99.95th=[51119], 00:42:27.491 | 99.99th=[51119] 00:42:27.491 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1913.26, stdev=29.37, samples=19 00:42:27.491 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:42:27.491 lat (msec) : 50=99.67%, 100=0.33% 00:42:27.491 cpu : usr=96.65%, sys=2.95%, ctx=12, majf=0, minf=31 00:42:27.491 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:27.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.491 filename1: (groupid=0, jobs=1): err= 0: pid=6578: Mon Jun 10 11:50:51 2024 00:42:27.491 read: IOPS=407, BW=1631KiB/s (1670kB/s)(15.9MiB/10005msec) 00:42:27.491 slat (nsec): min=6131, max=73011, avg=19600.72, stdev=9962.25 00:42:27.491 clat (usec): min=16115, max=94671, avg=39080.34, stdev=7494.58 00:42:27.491 lat (usec): min=16124, max=94688, avg=39099.94, stdev=7494.42 00:42:27.491 clat percentiles (usec): 00:42:27.491 | 1.00th=[32113], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:42:27.491 | 30.00th=[33424], 40.00th=[33817], 50.00th=[36439], 60.00th=[39060], 00:42:27.491 | 70.00th=[42730], 80.00th=[45351], 90.00th=[50070], 95.00th=[51643], 00:42:27.491 | 99.00th=[57410], 99.50th=[57934], 99.90th=[94897], 99.95th=[94897], 00:42:27.491 | 99.99th=[94897] 00:42:27.491 bw ( KiB/s): min= 1408, max= 1920, per=3.58%, avg=1637.05, stdev=218.87, samples=19 00:42:27.491 iops : min= 352, max= 480, avg=409.26, stdev=54.72, samples=19 00:42:27.491 lat (msec) : 20=0.27%, 50=91.45%, 100=8.28% 00:42:27.491 cpu : usr=95.76%, sys=3.84%, ctx=14, majf=0, minf=37 00:42:27.491 IO depths : 1=2.7%, 2=5.7%, 4=21.7%, 8=60.1%, 16=9.8%, 32=0.0%, >=64=0.0% 00:42:27.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 issued rwts: total=4080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.491 filename2: (groupid=0, jobs=1): err= 0: pid=6579: Mon Jun 10 11:50:51 2024 00:42:27.491 
read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10009msec) 00:42:27.491 slat (nsec): min=9494, max=69899, avg=25760.46, stdev=9937.73 00:42:27.491 clat (usec): min=20232, max=50727, avg=33162.07, stdev=1398.20 00:42:27.491 lat (usec): min=20267, max=50748, avg=33187.83, stdev=1397.23 00:42:27.491 clat percentiles (usec): 00:42:27.491 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:42:27.491 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.491 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.491 | 99.00th=[34341], 99.50th=[34866], 99.90th=[50594], 99.95th=[50594], 00:42:27.491 | 99.99th=[50594] 00:42:27.491 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1913.26, stdev=29.37, samples=19 00:42:27.491 iops : min= 448, max= 480, avg=478.32, stdev= 7.34, samples=19 00:42:27.491 lat (msec) : 50=99.67%, 100=0.33% 00:42:27.491 cpu : usr=96.72%, sys=2.87%, ctx=13, majf=0, minf=28 00:42:27.491 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:27.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.491 filename2: (groupid=0, jobs=1): err= 0: pid=6580: Mon Jun 10 11:50:51 2024 00:42:27.491 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10013msec) 00:42:27.491 slat (nsec): min=8578, max=74080, avg=26096.49, stdev=9919.25 00:42:27.491 clat (usec): min=14504, max=90959, avg=33266.13, stdev=3393.75 00:42:27.491 lat (usec): min=14539, max=90985, avg=33292.22, stdev=3393.15 00:42:27.491 clat percentiles (usec): 00:42:27.491 | 1.00th=[27395], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:27.491 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.491 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.491 | 99.00th=[38536], 99.50th=[42206], 99.90th=[82314], 99.95th=[90702], 00:42:27.491 | 99.99th=[90702] 00:42:27.491 bw ( KiB/s): min= 1664, max= 2032, per=4.17%, avg=1906.53, stdev=71.21, samples=19 00:42:27.491 iops : min= 416, max= 508, avg=476.63, stdev=17.80, samples=19 00:42:27.491 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:42:27.491 cpu : usr=96.77%, sys=2.83%, ctx=10, majf=0, minf=40 00:42:27.491 IO depths : 1=5.2%, 2=11.0%, 4=24.0%, 8=52.6%, 16=7.3%, 32=0.0%, >=64=0.0% 00:42:27.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.491 filename2: (groupid=0, jobs=1): err= 0: pid=6581: Mon Jun 10 11:50:51 2024 00:42:27.491 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10010msec) 00:42:27.491 slat (nsec): min=4611, max=46322, avg=14781.60, stdev=5050.19 00:42:27.491 clat (usec): min=3276, max=61218, avg=32615.47, stdev=4442.51 00:42:27.491 lat (usec): min=3287, max=61227, avg=32630.26, stdev=4442.66 00:42:27.491 clat percentiles (usec): 00:42:27.491 | 1.00th=[ 7439], 5.00th=[28443], 10.00th=[32375], 20.00th=[32637], 00:42:27.491 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:27.491 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.491 | 
99.00th=[46400], 99.50th=[47449], 99.90th=[58983], 99.95th=[60556], 00:42:27.491 | 99.99th=[61080] 00:42:27.491 bw ( KiB/s): min= 1792, max= 2304, per=4.27%, avg=1952.00, stdev=101.61, samples=19 00:42:27.491 iops : min= 448, max= 576, avg=488.00, stdev=25.40, samples=19 00:42:27.491 lat (msec) : 4=0.33%, 10=0.98%, 20=1.33%, 50=97.04%, 100=0.33% 00:42:27.491 cpu : usr=96.70%, sys=2.89%, ctx=14, majf=0, minf=34 00:42:27.491 IO depths : 1=4.5%, 2=10.4%, 4=24.5%, 8=52.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:42:27.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.491 issued rwts: total=4892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.491 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.492 filename2: (groupid=0, jobs=1): err= 0: pid=6582: Mon Jun 10 11:50:51 2024 00:42:27.492 read: IOPS=472, BW=1890KiB/s (1935kB/s)(18.5MiB/10006msec) 00:42:27.492 slat (nsec): min=6392, max=68205, avg=26860.17, stdev=10195.73 00:42:27.492 clat (usec): min=11705, max=84335, avg=33622.90, stdev=4522.82 00:42:27.492 lat (usec): min=11719, max=84354, avg=33649.76, stdev=4521.38 00:42:27.492 clat percentiles (usec): 00:42:27.492 | 1.00th=[24249], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:42:27.492 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.492 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:42:27.492 | 99.00th=[53740], 99.50th=[60031], 99.90th=[84411], 99.95th=[84411], 00:42:27.492 | 99.99th=[84411] 00:42:27.492 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1886.32, stdev=92.05, samples=19 00:42:27.492 iops : min= 416, max= 512, avg=471.58, stdev=23.01, samples=19 00:42:27.492 lat (msec) : 20=0.59%, 50=97.84%, 100=1.57% 00:42:27.492 cpu : usr=96.58%, sys=3.03%, ctx=16, majf=0, minf=26 00:42:27.492 IO depths : 1=4.8%, 2=9.9%, 4=22.9%, 8=54.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:42:27.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.492 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.492 issued rwts: total=4728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.492 filename2: (groupid=0, jobs=1): err= 0: pid=6583: Mon Jun 10 11:50:51 2024 00:42:27.492 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10010msec) 00:42:27.492 slat (nsec): min=6083, max=77289, avg=16601.38, stdev=8594.62 00:42:27.492 clat (usec): min=11377, max=54736, avg=33160.36, stdev=2705.95 00:42:27.492 lat (usec): min=11387, max=54753, avg=33176.96, stdev=2706.11 00:42:27.492 clat percentiles (usec): 00:42:27.492 | 1.00th=[24249], 5.00th=[28705], 10.00th=[32375], 20.00th=[32637], 00:42:27.492 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:27.492 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:42:27.492 | 99.00th=[41681], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:42:27.492 | 99.99th=[54789] 00:42:27.492 bw ( KiB/s): min= 1792, max= 2048, per=4.21%, avg=1923.53, stdev=58.68, samples=19 00:42:27.492 iops : min= 448, max= 512, avg=480.84, stdev=14.76, samples=19 00:42:27.492 lat (msec) : 20=0.21%, 50=99.46%, 100=0.33% 00:42:27.492 cpu : usr=96.79%, sys=2.82%, ctx=13, majf=0, minf=59 00:42:27.492 IO depths : 1=2.7%, 2=7.2%, 4=21.8%, 8=58.4%, 16=9.9%, 32=0.0%, >=64=0.0% 00:42:27.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:42:27.492 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.492 issued rwts: total=4812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.492 filename2: (groupid=0, jobs=1): err= 0: pid=6584: Mon Jun 10 11:50:51 2024 00:42:27.492 read: IOPS=478, BW=1912KiB/s (1958kB/s)(18.7MiB/10007msec) 00:42:27.492 slat (nsec): min=6350, max=77170, avg=26340.32, stdev=9731.01 00:42:27.492 clat (usec): min=14504, max=76856, avg=33224.70, stdev=2871.15 00:42:27.492 lat (usec): min=14532, max=76874, avg=33251.04, stdev=2870.22 00:42:27.492 clat percentiles (usec): 00:42:27.492 | 1.00th=[29492], 5.00th=[32113], 10.00th=[32375], 20.00th=[32900], 00:42:27.492 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.492 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.492 | 99.00th=[38011], 99.50th=[39584], 99.90th=[77071], 99.95th=[77071], 00:42:27.492 | 99.99th=[77071] 00:42:27.492 bw ( KiB/s): min= 1664, max= 2048, per=4.17%, avg=1906.53, stdev=72.59, samples=19 00:42:27.492 iops : min= 416, max= 512, avg=476.63, stdev=18.15, samples=19 00:42:27.492 lat (msec) : 20=0.33%, 50=99.33%, 100=0.33% 00:42:27.492 cpu : usr=97.08%, sys=2.50%, ctx=14, majf=0, minf=23 00:42:27.492 IO depths : 1=5.8%, 2=11.9%, 4=24.7%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:42:27.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.492 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.492 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.492 filename2: (groupid=0, jobs=1): err= 0: pid=6585: Mon Jun 10 11:50:51 2024 00:42:27.492 read: IOPS=479, BW=1918KiB/s (1964kB/s)(18.8MiB/10011msec) 00:42:27.492 slat (nsec): min=4513, max=75463, avg=25185.26, stdev=9660.80 00:42:27.492 clat (usec): min=22404, max=53737, avg=33157.82, stdev=1576.52 00:42:27.492 lat (usec): min=22413, max=53751, avg=33183.01, stdev=1576.12 00:42:27.492 clat percentiles (usec): 00:42:27.492 | 1.00th=[27395], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:27.492 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.492 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.492 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:42:27.492 | 99.99th=[53740] 00:42:27.492 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1913.26, stdev=51.80, samples=19 00:42:27.492 iops : min= 448, max= 512, avg=478.32, stdev=12.95, samples=19 00:42:27.492 lat (msec) : 50=99.98%, 100=0.02% 00:42:27.492 cpu : usr=97.04%, sys=2.56%, ctx=11, majf=0, minf=31 00:42:27.492 IO depths : 1=5.6%, 2=11.6%, 4=24.5%, 8=51.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:27.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.492 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.492 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.492 filename2: (groupid=0, jobs=1): err= 0: pid=6586: Mon Jun 10 11:50:51 2024 00:42:27.492 read: IOPS=477, BW=1912KiB/s (1958kB/s)(18.7MiB/10009msec) 00:42:27.492 slat (nsec): min=9033, max=66875, avg=24046.43, stdev=9291.83 00:42:27.492 clat (usec): min=21664, max=71182, avg=33286.29, stdev=2410.21 00:42:27.492 lat (usec): min=21675, max=71215, 
avg=33310.33, stdev=2409.95 00:42:27.492 clat percentiles (usec): 00:42:27.492 | 1.00th=[31065], 5.00th=[32375], 10.00th=[32375], 20.00th=[32900], 00:42:27.492 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:27.492 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:42:27.492 | 99.00th=[34866], 99.50th=[44827], 99.90th=[70779], 99.95th=[70779], 00:42:27.492 | 99.99th=[70779] 00:42:27.492 bw ( KiB/s): min= 1664, max= 1920, per=4.17%, avg=1906.53, stdev=58.73, samples=19 00:42:27.492 iops : min= 416, max= 480, avg=476.63, stdev=14.68, samples=19 00:42:27.492 lat (msec) : 50=99.67%, 100=0.33% 00:42:27.492 cpu : usr=96.69%, sys=2.91%, ctx=9, majf=0, minf=35 00:42:27.492 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:42:27.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.492 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:27.492 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:27.492 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:27.492 00:42:27.492 Run status group 0 (all jobs): 00:42:27.492 READ: bw=44.6MiB/s (46.8MB/s), 1631KiB/s-1967KiB/s (1670kB/s-2014kB/s), io=447MiB (469MB), run=10001-10013msec 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.492 
11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.492 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.493 bdev_null0 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.493 [2024-06-10 11:50:51.508022] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.493 bdev_null1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:27.493 11:50:51 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:27.493 { 00:42:27.493 "params": { 00:42:27.493 "name": "Nvme$subsystem", 00:42:27.493 "trtype": "$TEST_TRANSPORT", 00:42:27.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:27.493 "adrfam": "ipv4", 00:42:27.493 "trsvcid": "$NVMF_PORT", 00:42:27.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:27.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:27.493 "hdgst": ${hdgst:-false}, 00:42:27.493 "ddgst": ${ddgst:-false} 00:42:27.493 }, 00:42:27.493 "method": "bdev_nvme_attach_controller" 00:42:27.493 } 00:42:27.493 EOF 00:42:27.493 )") 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:27.493 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:27.493 { 00:42:27.493 "params": { 00:42:27.493 "name": "Nvme$subsystem", 00:42:27.493 "trtype": "$TEST_TRANSPORT", 00:42:27.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:27.493 "adrfam": "ipv4", 00:42:27.493 "trsvcid": "$NVMF_PORT", 00:42:27.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:27.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:27.493 "hdgst": ${hdgst:-false}, 00:42:27.493 "ddgst": ${ddgst:-false} 00:42:27.493 }, 00:42:27.493 "method": "bdev_nvme_attach_controller" 00:42:27.493 } 00:42:27.493 EOF 
00:42:27.493 )") 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:27.494 "params": { 00:42:27.494 "name": "Nvme0", 00:42:27.494 "trtype": "tcp", 00:42:27.494 "traddr": "10.0.0.2", 00:42:27.494 "adrfam": "ipv4", 00:42:27.494 "trsvcid": "4420", 00:42:27.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:27.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:27.494 "hdgst": false, 00:42:27.494 "ddgst": false 00:42:27.494 }, 00:42:27.494 "method": "bdev_nvme_attach_controller" 00:42:27.494 },{ 00:42:27.494 "params": { 00:42:27.494 "name": "Nvme1", 00:42:27.494 "trtype": "tcp", 00:42:27.494 "traddr": "10.0.0.2", 00:42:27.494 "adrfam": "ipv4", 00:42:27.494 "trsvcid": "4420", 00:42:27.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:27.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:27.494 "hdgst": false, 00:42:27.494 "ddgst": false 00:42:27.494 }, 00:42:27.494 "method": "bdev_nvme_attach_controller" 00:42:27.494 }' 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:27.494 11:50:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:27.494 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:27.494 ... 00:42:27.494 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:27.494 ... 
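[editor's note] The two per-file preambles above come from the job file that gen_fio_conf writes for this pass; the parameters (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, one extra file) were set in the trace further up, and the file itself is handed to fio on /dev/fd/61 without being echoed. A rough reconstruction, with the section headers and the Nvme0n1/Nvme1n1 bdev names assumed, would look like:

    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=8k,16k,128k    ; read,write,trim sizes, matching (R) 8192B / (W) 16KiB / (T) 128KiB above
    iodepth=8
    numjobs=2
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

With numjobs=2 over two file sections this accounts for the "Starting 4 threads" line that follows.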
00:42:27.494 fio-3.35 00:42:27.494 Starting 4 threads 00:42:27.494 EAL: No free 2048 kB hugepages reported on node 1 00:42:32.768 00:42:32.768 filename0: (groupid=0, jobs=1): err= 0: pid=8574: Mon Jun 10 11:50:57 2024 00:42:32.768 read: IOPS=2010, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5004msec) 00:42:32.768 slat (nsec): min=8080, max=70531, avg=12764.02, stdev=5717.46 00:42:32.768 clat (usec): min=2349, max=44278, avg=3940.21, stdev=1188.75 00:42:32.768 lat (usec): min=2363, max=44304, avg=3952.98, stdev=1188.72 00:42:32.768 clat percentiles (usec): 00:42:32.768 | 1.00th=[ 3064], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3720], 00:42:32.768 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3982], 60.00th=[ 3982], 00:42:32.768 | 70.00th=[ 4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4178], 00:42:32.768 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 6456], 99.95th=[44303], 00:42:32.768 | 99.99th=[44303] 00:42:32.768 bw ( KiB/s): min=15184, max=16432, per=25.38%, avg=16048.00, stdev=419.22, samples=9 00:42:32.768 iops : min= 1898, max= 2054, avg=2006.00, stdev=52.40, samples=9 00:42:32.768 lat (msec) : 4=66.10%, 10=33.82%, 50=0.08% 00:42:32.768 cpu : usr=95.08%, sys=4.54%, ctx=6, majf=0, minf=63 00:42:32.768 IO depths : 1=0.1%, 2=1.3%, 4=71.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.768 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.768 issued rwts: total=10062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.768 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:32.768 filename0: (groupid=0, jobs=1): err= 0: pid=8575: Mon Jun 10 11:50:57 2024 00:42:32.768 read: IOPS=1941, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5001msec) 00:42:32.768 slat (nsec): min=7748, max=58557, avg=12781.37, stdev=5500.96 00:42:32.768 clat (usec): min=2216, max=45763, avg=4084.25, stdev=1326.86 00:42:32.768 lat (usec): min=2225, max=45786, avg=4097.03, stdev=1326.85 00:42:32.768 clat percentiles (usec): 00:42:32.768 | 1.00th=[ 3326], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3752], 00:42:32.768 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 3949], 60.00th=[ 3982], 00:42:32.768 | 70.00th=[ 3982], 80.00th=[ 4015], 90.00th=[ 4359], 95.00th=[ 5735], 00:42:32.768 | 99.00th=[ 6128], 99.50th=[ 6259], 99.90th=[ 6915], 99.95th=[45876], 00:42:32.768 | 99.99th=[45876] 00:42:32.768 bw ( KiB/s): min=13803, max=16304, per=24.39%, avg=15421.67, stdev=750.68, samples=9 00:42:32.768 iops : min= 1725, max= 2038, avg=1927.67, stdev=93.94, samples=9 00:42:32.768 lat (msec) : 4=71.67%, 10=28.25%, 50=0.08% 00:42:32.768 cpu : usr=95.38%, sys=4.26%, ctx=11, majf=0, minf=74 00:42:32.768 IO depths : 1=0.1%, 2=0.1%, 4=73.7%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.768 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.768 issued rwts: total=9710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.768 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:32.768 filename1: (groupid=0, jobs=1): err= 0: pid=8576: Mon Jun 10 11:50:57 2024 00:42:32.768 read: IOPS=1974, BW=15.4MiB/s (16.2MB/s)(77.2MiB/5002msec) 00:42:32.768 slat (nsec): min=5556, max=57590, avg=11612.27, stdev=7058.32 00:42:32.768 clat (usec): min=1478, max=6694, avg=4021.04, stdev=614.91 00:42:32.768 lat (usec): min=1488, max=6701, avg=4032.65, stdev=614.07 00:42:32.768 clat percentiles (usec): 00:42:32.768 | 1.00th=[ 2474], 5.00th=[ 3523], 
10.00th=[ 3654], 20.00th=[ 3720], 00:42:32.768 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 3949], 60.00th=[ 3982], 00:42:32.768 | 70.00th=[ 4015], 80.00th=[ 4047], 90.00th=[ 4490], 95.00th=[ 5735], 00:42:32.768 | 99.00th=[ 6194], 99.50th=[ 6194], 99.90th=[ 6390], 99.95th=[ 6456], 00:42:32.768 | 99.99th=[ 6718] 00:42:32.768 bw ( KiB/s): min=15184, max=16512, per=25.11%, avg=15872.00, stdev=417.23, samples=9 00:42:32.768 iops : min= 1898, max= 2064, avg=1984.00, stdev=52.15, samples=9 00:42:32.768 lat (msec) : 2=0.47%, 4=63.90%, 10=35.64% 00:42:32.768 cpu : usr=95.60%, sys=4.06%, ctx=9, majf=0, minf=115 00:42:32.768 IO depths : 1=0.1%, 2=0.5%, 4=68.7%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.768 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.768 issued rwts: total=9877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.768 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:32.768 filename1: (groupid=0, jobs=1): err= 0: pid=8577: Mon Jun 10 11:50:57 2024 00:42:32.768 read: IOPS=1978, BW=15.5MiB/s (16.2MB/s)(77.3MiB/5001msec) 00:42:32.768 slat (nsec): min=5840, max=61774, avg=14733.88, stdev=8309.60 00:42:32.768 clat (usec): min=1295, max=6565, avg=4003.53, stdev=537.19 00:42:32.768 lat (usec): min=1303, max=6583, avg=4018.27, stdev=536.48 00:42:32.768 clat percentiles (usec): 00:42:32.768 | 1.00th=[ 2704], 5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3752], 00:42:32.768 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 3982], 00:42:32.768 | 70.00th=[ 3982], 80.00th=[ 4015], 90.00th=[ 4555], 95.00th=[ 5407], 00:42:32.768 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[ 6456], 00:42:32.768 | 99.99th=[ 6587] 00:42:32.768 bw ( KiB/s): min=15104, max=16782, per=25.11%, avg=15873.56, stdev=624.79, samples=9 00:42:32.768 iops : min= 1888, max= 2097, avg=1984.11, stdev=77.96, samples=9 00:42:32.768 lat (msec) : 2=0.35%, 4=74.06%, 10=25.59% 00:42:32.768 cpu : usr=95.12%, sys=4.48%, ctx=7, majf=0, minf=97 00:42:32.768 IO depths : 1=0.1%, 2=2.3%, 4=67.4%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.768 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.768 issued rwts: total=9895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.768 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:32.768 00:42:32.768 Run status group 0 (all jobs): 00:42:32.768 READ: bw=61.7MiB/s (64.7MB/s), 15.2MiB/s-15.7MiB/s (15.9MB/s-16.5MB/s), io=309MiB (324MB), run=5001-5004msec 00:42:33.027 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:33.028 00:42:33.028 real 0m24.751s 00:42:33.028 user 5m2.126s 00:42:33.028 sys 0m10.522s 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:33.028 11:50:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:33.028 ************************************ 00:42:33.028 END TEST fio_dif_rand_params 00:42:33.028 ************************************ 00:42:33.028 11:50:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:33.028 11:50:58 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:33.028 11:50:58 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:33.028 11:50:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:33.287 ************************************ 00:42:33.288 START TEST fio_dif_digest 00:42:33.288 ************************************ 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:33.288 
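[editor's note] Stripped of the xtrace noise, the create_subsystems 0 call traced below boils down to four RPCs against the running target: create a small null bdev (64 MB, 512-byte blocks) with 16 bytes of metadata and DIF type 3, create an NVMe-oF subsystem, attach the bdev as its namespace, and add a TCP listener. A condensed sketch, with the arguments copied from the trace (rpc_cmd is a wrapper; calling scripts/rpc.py directly is an assumption):

    # target-side setup for the fio_dif_digest case, condensed from the trace below
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420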
11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:33.288 bdev_null0 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:33.288 [2024-06-10 11:50:58.183868] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:33.288 { 00:42:33.288 "params": { 00:42:33.288 "name": "Nvme$subsystem", 00:42:33.288 "trtype": "$TEST_TRANSPORT", 00:42:33.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:33.288 "adrfam": "ipv4", 00:42:33.288 "trsvcid": "$NVMF_PORT", 00:42:33.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:33.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:33.288 "hdgst": ${hdgst:-false}, 00:42:33.288 "ddgst": ${ddgst:-false} 00:42:33.288 }, 00:42:33.288 "method": "bdev_nvme_attach_controller" 
00:42:33.288 } 00:42:33.288 EOF 00:42:33.288 )") 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
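[editor's note] The heredoc fragments above only carry the per-controller "params" objects; the jq step folds them into the SPDK JSON config that fio receives on /dev/fd/62. The wrapper itself is not printed in the log, so the following is only an illustration of the usual shape of such a config, populated with the Nvme0 values visible just below (header and data digest enabled for this run):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }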
00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:33.288 "params": { 00:42:33.288 "name": "Nvme0", 00:42:33.288 "trtype": "tcp", 00:42:33.288 "traddr": "10.0.0.2", 00:42:33.288 "adrfam": "ipv4", 00:42:33.288 "trsvcid": "4420", 00:42:33.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:33.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:33.288 "hdgst": true, 00:42:33.288 "ddgst": true 00:42:33.288 }, 00:42:33.288 "method": "bdev_nvme_attach_controller" 00:42:33.288 }' 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:33.288 11:50:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.547 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:33.547 ... 
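[editor's note] Outside the harness, the run started above can be approximated by pointing fio at the SPDK bdev plugin and giving it the same two inputs as ordinary files instead of /dev/fd descriptors (a simplification; paths follow the workspace layout seen in the log, bdev.json and dif_digest.fio are placeholder names):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_digest.fio

Note that hdgst/ddgst live in the bdev_nvme_attach_controller params, not in the fio job: the NVMe/TCP initiator negotiates header and data digests when it connects, so the I/O side of the job file is unchanged from the non-digest runs.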
00:42:33.547 fio-3.35 00:42:33.547 Starting 3 threads 00:42:33.806 EAL: No free 2048 kB hugepages reported on node 1 00:42:46.109 00:42:46.109 filename0: (groupid=0, jobs=1): err= 0: pid=9788: Mon Jun 10 11:51:09 2024 00:42:46.109 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(264MiB/10010msec) 00:42:46.109 slat (nsec): min=2977, max=26473, avg=10829.76, stdev=2362.19 00:42:46.109 clat (usec): min=6228, max=95470, avg=14230.35, stdev=7348.21 00:42:46.109 lat (usec): min=6235, max=95480, avg=14241.18, stdev=7348.28 00:42:46.109 clat percentiles (usec): 00:42:46.109 | 1.00th=[ 7635], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11994], 00:42:46.109 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13566], 60.00th=[13829], 00:42:46.109 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15139], 95.00th=[15795], 00:42:46.109 | 99.00th=[55313], 99.50th=[55837], 99.90th=[93848], 99.95th=[94897], 00:42:46.109 | 99.99th=[95945] 00:42:46.109 bw ( KiB/s): min=19712, max=30976, per=35.87%, avg=26944.00, stdev=2974.89, samples=20 00:42:46.109 iops : min= 154, max= 242, avg=210.50, stdev=23.24, samples=20 00:42:46.109 lat (msec) : 10=7.07%, 20=90.28%, 50=0.24%, 100=2.42% 00:42:46.109 cpu : usr=92.17%, sys=7.44%, ctx=16, majf=0, minf=92 00:42:46.109 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:46.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.109 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:46.109 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:46.109 filename0: (groupid=0, jobs=1): err= 0: pid=9789: Mon Jun 10 11:51:09 2024 00:42:46.109 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(259MiB/10005msec) 00:42:46.109 slat (nsec): min=8577, max=36514, avg=13644.47, stdev=2134.96 00:42:46.109 clat (usec): min=6498, max=93298, avg=14461.47, stdev=6505.95 00:42:46.109 lat (usec): min=6509, max=93312, avg=14475.12, stdev=6506.04 00:42:46.109 clat percentiles (usec): 00:42:46.109 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[12125], 00:42:46.109 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14353], 00:42:46.109 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15795], 95.00th=[16319], 00:42:46.109 | 99.00th=[55313], 99.50th=[55837], 99.90th=[58459], 99.95th=[91751], 00:42:46.109 | 99.99th=[92799] 00:42:46.109 bw ( KiB/s): min=22528, max=30208, per=35.28%, avg=26498.60, stdev=2009.26, samples=20 00:42:46.109 iops : min= 176, max= 236, avg=207.00, stdev=15.70, samples=20 00:42:46.109 lat (msec) : 10=5.98%, 20=91.94%, 50=0.05%, 100=2.03% 00:42:46.109 cpu : usr=90.29%, sys=9.29%, ctx=21, majf=0, minf=154 00:42:46.109 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:46.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.109 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:46.109 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:46.109 filename0: (groupid=0, jobs=1): err= 0: pid=9790: Mon Jun 10 11:51:09 2024 00:42:46.109 read: IOPS=170, BW=21.3MiB/s (22.4MB/s)(214MiB/10048msec) 00:42:46.109 slat (nsec): min=8557, max=27762, avg=13667.03, stdev=1994.45 00:42:46.109 clat (usec): min=8652, max=96353, avg=17534.58, stdev=8859.09 00:42:46.109 lat (usec): min=8661, max=96367, avg=17548.24, stdev=8859.06 00:42:46.109 clat percentiles (usec): 00:42:46.109 | 
1.00th=[10028], 5.00th=[11863], 10.00th=[12911], 20.00th=[14484], 00:42:46.109 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15926], 60.00th=[16450], 00:42:46.109 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18482], 95.00th=[20317], 00:42:46.109 | 99.00th=[57934], 99.50th=[58459], 99.90th=[59507], 99.95th=[95945], 00:42:46.109 | 99.99th=[95945] 00:42:46.109 bw ( KiB/s): min=17408, max=25344, per=29.18%, avg=21913.60, stdev=2041.93, samples=20 00:42:46.109 iops : min= 136, max= 198, avg=171.20, stdev=15.95, samples=20 00:42:46.109 lat (msec) : 10=1.05%, 20=93.53%, 50=0.82%, 100=4.61% 00:42:46.109 cpu : usr=91.03%, sys=8.58%, ctx=17, majf=0, minf=87 00:42:46.109 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:46.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:46.109 issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:46.109 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:46.109 00:42:46.109 Run status group 0 (all jobs): 00:42:46.109 READ: bw=73.3MiB/s (76.9MB/s), 21.3MiB/s-26.3MiB/s (22.4MB/s-27.6MB/s), io=737MiB (773MB), run=10005-10048msec 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:46.109 00:42:46.109 real 0m11.335s 00:42:46.109 user 0m39.289s 00:42:46.109 sys 0m2.916s 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:46.109 11:51:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:46.109 ************************************ 00:42:46.109 END TEST fio_dif_digest 00:42:46.109 ************************************ 00:42:46.109 11:51:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:46.109 11:51:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:46.109 11:51:09 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:46.109 11:51:09 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:42:46.109 11:51:09 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:46.109 11:51:09 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:42:46.109 11:51:09 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:46.110 11:51:09 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:46.110 rmmod nvme_tcp 00:42:46.110 rmmod nvme_fabrics 
00:42:46.110 rmmod nvme_keyring 00:42:46.110 11:51:09 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:46.110 11:51:09 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:42:46.110 11:51:09 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:42:46.110 11:51:09 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 537 ']' 00:42:46.110 11:51:09 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 537 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 537 ']' 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 537 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 537 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 537' 00:42:46.110 killing process with pid 537 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@968 -- # kill 537 00:42:46.110 11:51:09 nvmf_dif -- common/autotest_common.sh@973 -- # wait 537 00:42:46.110 11:51:09 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:42:46.110 11:51:09 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:48.668 Waiting for block devices as requested 00:42:48.927 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:48.927 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:48.927 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:49.187 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:49.187 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:49.187 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:49.446 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:49.446 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:49.446 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:42:49.706 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:42:49.706 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:42:49.706 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:42:49.965 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:42:49.965 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:42:49.965 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:42:50.224 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:42:50.224 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:42:50.224 11:51:15 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:50.224 11:51:15 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:50.224 11:51:15 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:50.224 11:51:15 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:50.224 11:51:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:50.224 11:51:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:50.224 11:51:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:52.762 11:51:17 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:52.762 00:42:52.762 real 1m20.588s 00:42:52.762 user 7m34.031s 00:42:52.762 sys 0m33.496s 00:42:52.762 11:51:17 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:52.762 11:51:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:52.762 
************************************ 00:42:52.762 END TEST nvmf_dif 00:42:52.762 ************************************ 00:42:52.762 11:51:17 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:52.762 11:51:17 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:42:52.762 11:51:17 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:52.762 11:51:17 -- common/autotest_common.sh@10 -- # set +x 00:42:52.762 ************************************ 00:42:52.762 START TEST nvmf_abort_qd_sizes 00:42:52.762 ************************************ 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:52.762 * Looking for test storage... 00:42:52.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:52.762 11:51:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:42:52.762 11:51:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:00.887 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:00.887 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:00.887 Found net devices under 0000:af:00.0: cvl_0_0 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:00.887 Found net devices under 0000:af:00.1: cvl_0_1 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
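[editor's note] The discovery loop above relies only on sysfs: for each supported PCI function it lists the netdev directory and keeps interfaces whose link is up. A standalone equivalent, hard-coded to the two E810 functions found here (the exact "up" check in the trace is an assumption):

    # list the kernel net interfaces behind each E810 port, as the common.sh loop does
    for pci in 0000:af:00.0 0000:af:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            dev=${netdir##*/}
            # the harness additionally requires the interface to report "up"
            echo "Found net devices under $pci: $dev"
        done
    done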
00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:00.887 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:00.888 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:00.888 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:01.148 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:01.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:01.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:43:01.148 00:43:01.148 --- 10.0.0.2 ping statistics --- 00:43:01.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:01.148 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:43:01.148 11:51:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:01.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:01.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:43:01.148 00:43:01.148 --- 10.0.0.1 ping statistics --- 00:43:01.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:01.148 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:43:01.148 11:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:01.148 11:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:43:01.148 11:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:43:01.148 11:51:26 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:05.344 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:05.344 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:06.722 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:06.722 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=20000 00:43:06.723 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:06.723 11:51:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 20000 00:43:06.723 11:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 20000 ']' 00:43:06.723 11:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:06.723 11:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:06.723 11:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
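Before the target application is launched, nvmf_tcp_init (traced above) splits the two E810 ports into a target side and an initiator side. Condensed from the trace, without the xtrace prefixes, the topology setup amounts to:

# cvl_0_0 is moved into a namespace and becomes the NVMe/TCP target side at
# 10.0.0.2; cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1.
# Port 4420 is opened for the fabric traffic, then both directions are pinged.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns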
00:43:06.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:06.723 11:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:06.723 11:51:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:06.723 [2024-06-10 11:51:31.754635] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:43:06.723 [2024-06-10 11:51:31.754697] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:06.723 EAL: No free 2048 kB hugepages reported on node 1 00:43:06.982 [2024-06-10 11:51:31.881912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:06.982 [2024-06-10 11:51:31.970503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:06.982 [2024-06-10 11:51:31.970551] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:06.982 [2024-06-10 11:51:31.970565] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:06.982 [2024-06-10 11:51:31.970584] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:06.982 [2024-06-10 11:51:31.970594] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:06.982 [2024-06-10 11:51:31.970643] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:06.982 [2024-06-10 11:51:31.970735] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:43:06.982 [2024-06-10 11:51:31.970850] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:06.982 [2024-06-10 11:51:31.970849] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:43:07.553 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:07.813 11:51:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:07.813 ************************************ 00:43:07.813 START TEST spdk_target_abort 00:43:07.813 ************************************ 00:43:07.813 11:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:43:07.813 11:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:07.813 11:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:43:07.813 11:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:07.813 11:51:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:11.102 spdk_targetn1 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:11.102 [2024-06-10 11:51:35.583700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:11.102 [2024-06-10 11:51:35.620002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:11.102 11:51:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:11.102 EAL: No free 2048 kB hugepages reported on node 1 
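Stripped of the xtrace prefixes, the spdk_target_abort setup traced above boils down to attaching the local NVMe device as a bdev, exposing it through an NVMe/TCP subsystem on the namespaced target address, and then driving that subsystem with the abort example at each queue depth. A condensed sketch (rpc_cmd is the test suite's RPC helper; the assumption here is that it forwards to the running target's default RPC socket):

rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target   # local NVMe -> bdev spdk_targetn1
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# Each queue depth (4, 24, 64) is then exercised against that subsystem:
for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done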
00:43:14.396 Initializing NVMe Controllers 00:43:14.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:14.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:14.396 Initialization complete. Launching workers. 00:43:14.396 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10869, failed: 0 00:43:14.396 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1805, failed to submit 9064 00:43:14.396 success 842, unsuccess 963, failed 0 00:43:14.396 11:51:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:14.396 11:51:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:14.396 EAL: No free 2048 kB hugepages reported on node 1 00:43:17.687 Initializing NVMe Controllers 00:43:17.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:17.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:17.687 Initialization complete. Launching workers. 00:43:17.687 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8646, failed: 0 00:43:17.687 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 7403 00:43:17.687 success 315, unsuccess 928, failed 0 00:43:17.687 11:51:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:17.687 11:51:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:17.687 EAL: No free 2048 kB hugepages reported on node 1 00:43:20.977 Initializing NVMe Controllers 00:43:20.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:20.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:20.977 Initialization complete. Launching workers. 
00:43:20.977 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37401, failed: 0 00:43:20.977 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2688, failed to submit 34713 00:43:20.977 success 586, unsuccess 2102, failed 0 00:43:20.977 11:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:20.977 11:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:20.977 11:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:20.977 11:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:20.977 11:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:20.977 11:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:20.977 11:51:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 20000 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 20000 ']' 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 20000 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 20000 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 20000' 00:43:22.357 killing process with pid 20000 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 20000 00:43:22.357 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 20000 00:43:22.617 00:43:22.617 real 0m14.797s 00:43:22.617 user 0m58.430s 00:43:22.617 sys 0m2.800s 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:22.617 ************************************ 00:43:22.617 END TEST spdk_target_abort 00:43:22.617 ************************************ 00:43:22.617 11:51:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:22.617 11:51:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:22.617 11:51:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:22.617 11:51:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:22.617 ************************************ 00:43:22.617 START TEST kernel_target_abort 00:43:22.617 
************************************ 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:22.617 11:51:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:26.812 Waiting for block devices as requested 00:43:26.812 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:26.812 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:26.812 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:26.812 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:26.812 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:27.145 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:27.145 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:27.145 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:27.145 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:27.438 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:27.438 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:27.438 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:27.438 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:27.698 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:27.698 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:27.698 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:27.957 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:43:27.957 11:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:43:27.957 11:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:27.958 11:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:43:27.958 11:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:43:27.958 11:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:27.958 11:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:43:27.958 11:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:43:27.958 11:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:43:27.958 11:51:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:27.958 No valid GPT data, bailing 00:43:27.958 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:27.958 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:43:27.958 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:28.217 11:51:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:43:28.217 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 --hostid=809b5fbc-4be7-e711-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:43:28.218 00:43:28.218 Discovery Log Number of Records 2, Generation counter 2 00:43:28.218 =====Discovery Log Entry 0====== 00:43:28.218 trtype: tcp 00:43:28.218 adrfam: ipv4 00:43:28.218 subtype: current discovery subsystem 00:43:28.218 treq: not specified, sq flow control disable supported 00:43:28.218 portid: 1 00:43:28.218 trsvcid: 4420 00:43:28.218 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:28.218 traddr: 10.0.0.1 00:43:28.218 eflags: none 00:43:28.218 sectype: none 00:43:28.218 =====Discovery Log Entry 1====== 00:43:28.218 trtype: tcp 00:43:28.218 adrfam: ipv4 00:43:28.218 subtype: nvme subsystem 00:43:28.218 treq: not specified, sq flow control disable supported 00:43:28.218 portid: 1 00:43:28.218 trsvcid: 4420 00:43:28.218 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:28.218 traddr: 10.0.0.1 00:43:28.218 eflags: none 00:43:28.218 sectype: none 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:28.218 11:51:53 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:28.218 11:51:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:28.218 EAL: No free 2048 kB hugepages reported on node 1 00:43:31.508 Initializing NVMe Controllers 00:43:31.508 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:31.508 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:31.508 Initialization complete. Launching workers. 00:43:31.508 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 50654, failed: 0 00:43:31.508 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 50654, failed to submit 0 00:43:31.508 success 0, unsuccess 50654, failed 0 00:43:31.508 11:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:31.508 11:51:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:31.509 EAL: No free 2048 kB hugepages reported on node 1 00:43:34.799 Initializing NVMe Controllers 00:43:34.799 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:34.799 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:34.799 Initialization complete. Launching workers. 
00:43:34.799 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87690, failed: 0 00:43:34.799 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21998, failed to submit 65692 00:43:34.799 success 0, unsuccess 21998, failed 0 00:43:34.799 11:51:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:34.799 11:51:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:34.799 EAL: No free 2048 kB hugepages reported on node 1 00:43:38.087 Initializing NVMe Controllers 00:43:38.087 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:38.087 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:38.087 Initialization complete. Launching workers. 00:43:38.087 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84626, failed: 0 00:43:38.087 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21150, failed to submit 63476 00:43:38.087 success 0, unsuccess 21150, failed 0 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:43:38.087 11:52:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:41.378 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:41.378 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:41.378 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:41.378 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:41.378 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:41.378 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:41.637 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:41.637 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:41.637 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:41.637 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:41.637 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:41.637 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:41.637 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:41.637 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:43:41.637 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:41.637 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:43.544 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:43:43.544 00:43:43.544 real 0m20.690s 00:43:43.544 user 0m8.593s 00:43:43.544 sys 0m6.981s 00:43:43.544 11:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:43.544 11:52:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:43.544 ************************************ 00:43:43.544 END TEST kernel_target_abort 00:43:43.544 ************************************ 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:43.544 rmmod nvme_tcp 00:43:43.544 rmmod nvme_fabrics 00:43:43.544 rmmod nvme_keyring 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 20000 ']' 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 20000 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 20000 ']' 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 20000 00:43:43.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (20000) - No such process 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 20000 is not found' 00:43:43.544 Process with pid 20000 is not found 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:43:43.544 11:52:08 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:47.735 Waiting for block devices as requested 00:43:47.735 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:47.735 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:47.735 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:47.735 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:47.735 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:47.735 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:47.994 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:47.994 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:47.994 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:48.254 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:48.254 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:48.254 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:48.513 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:48.513 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:48.513 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:48.772 0000:80:04.0 (8086 
2021): vfio-pci -> ioatdma 00:43:48.772 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:43:49.031 11:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:49.031 11:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:49.031 11:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:49.031 11:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:49.031 11:52:13 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:49.031 11:52:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:49.031 11:52:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:50.938 11:52:15 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:50.938 00:43:50.938 real 0m58.487s 00:43:50.938 user 1m12.722s 00:43:50.938 sys 0m22.390s 00:43:50.938 11:52:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:50.938 11:52:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:50.938 ************************************ 00:43:50.938 END TEST nvmf_abort_qd_sizes 00:43:50.938 ************************************ 00:43:50.938 11:52:16 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:50.938 11:52:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:43:50.938 11:52:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:50.938 11:52:16 -- common/autotest_common.sh@10 -- # set +x 00:43:51.198 ************************************ 00:43:51.198 START TEST keyring_file 00:43:51.198 ************************************ 00:43:51.198 11:52:16 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:51.198 * Looking for test storage... 
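Between the two test groups, nvmftestfini (traced a few lines above, just before keyring_file starts probing for test storage) tears the NVMe/TCP setup back down. Condensed from the trace, with the xtrace prefixes dropped:

sync
modprobe -v -r nvme-tcp        # also unloads nvme_fabrics and nvme_keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
# killprocess 20000 finds the target already gone ("No such process")
scripts/setup.sh reset          # hand the ioatdma/NVMe devices back to their kernel drivers
ip -4 addr flush cvl_0_1        # drop 10.0.0.1/24 from the initiator port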
00:43:51.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:51.198 11:52:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:51.198 11:52:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:51.198 11:52:16 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:51.198 11:52:16 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:51.198 11:52:16 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:51.198 11:52:16 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.198 11:52:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.198 11:52:16 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.198 11:52:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:51.198 11:52:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@47 -- # : 0 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:51.198 11:52:16 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:51.198 11:52:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:51.198 11:52:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:51.198 11:52:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:51.198 11:52:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:51.198 11:52:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:51.199 11:52:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:51.199 11:52:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lorm47WBwv 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:43:51.199 11:52:16 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lorm47WBwv 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lorm47WBwv 00:43:51.199 11:52:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lorm47WBwv 00:43:51.199 11:52:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OksynNP9En 00:43:51.199 11:52:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:43:51.199 11:52:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:43:51.459 11:52:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OksynNP9En 00:43:51.459 11:52:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OksynNP9En 00:43:51.459 11:52:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.OksynNP9En 00:43:51.459 11:52:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=30198 00:43:51.459 11:52:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:51.459 11:52:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 30198 00:43:51.459 11:52:16 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 30198 ']' 00:43:51.459 11:52:16 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:51.459 11:52:16 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:51.459 11:52:16 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:51.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:51.459 11:52:16 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:51.459 11:52:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:51.459 [2024-06-10 11:52:16.375049] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
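The keyring_file setup traced above prepares two on-disk TLS PSKs before any target is configured. Roughly, prep_key does the following for each key; this is a sketch only: the redirection into the temp file is implied by the chmod and echo that follow in the trace, and the actual NVMeTLSkey-1 interchange encoding is produced by an inline python snippet in nvmf/common.sh that is not reproduced here.

# key0: 00112233445566778899aabbccddeeff, digest 0 -> /tmp/tmp.lorm47WBwv in this run
# key1: 112233445566778899aabbccddeeff00, digest 0 -> /tmp/tmp.OksynNP9En in this run
path=$(mktemp)                                                          # e.g. /tmp/tmp.lorm47WBwv
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"     # helper from nvmf/common.sh, emits an NVMeTLSkey-1-prefixed PSK
chmod 0600 "$path"                                                      # restrict permissions on the key file
key0path=$path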
00:43:51.459 [2024-06-10 11:52:16.375114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30198 ] 00:43:51.459 EAL: No free 2048 kB hugepages reported on node 1 00:43:51.459 [2024-06-10 11:52:16.494965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:51.717 [2024-06-10 11:52:16.589289] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:43:52.287 11:52:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:52.287 [2024-06-10 11:52:17.275859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:52.287 null0 00:43:52.287 [2024-06-10 11:52:17.307908] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:52.287 [2024-06-10 11:52:17.308265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:52.287 [2024-06-10 11:52:17.315927] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:52.287 11:52:17 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:52.287 [2024-06-10 11:52:17.331982] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:52.287 request: 00:43:52.287 { 00:43:52.287 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:52.287 "secure_channel": false, 00:43:52.287 "listen_address": { 00:43:52.287 "trtype": "tcp", 00:43:52.287 "traddr": "127.0.0.1", 00:43:52.287 "trsvcid": "4420" 00:43:52.287 }, 00:43:52.287 "method": "nvmf_subsystem_add_listener", 00:43:52.287 "req_id": 1 00:43:52.287 } 00:43:52.287 Got JSON-RPC error response 00:43:52.287 response: 00:43:52.287 { 00:43:52.287 "code": -32602, 00:43:52.287 "message": "Invalid parameters" 00:43:52.287 } 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:43:52.287 11:52:17 
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:52.287 11:52:17 keyring_file -- keyring/file.sh@46 -- # bperfpid=30463 00:43:52.287 11:52:17 keyring_file -- keyring/file.sh@48 -- # waitforlisten 30463 /var/tmp/bperf.sock 00:43:52.287 11:52:17 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 30463 ']' 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:52.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:52.287 11:52:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:52.287 [2024-06-10 11:52:17.389883] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 00:43:52.287 [2024-06-10 11:52:17.389942] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid30463 ] 00:43:52.547 EAL: No free 2048 kB hugepages reported on node 1 00:43:52.547 [2024-06-10 11:52:17.499872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:52.547 [2024-06-10 11:52:17.586500] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:53.485 11:52:18 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:53.485 11:52:18 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:43:53.485 11:52:18 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lorm47WBwv 00:43:53.485 11:52:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lorm47WBwv 00:43:53.485 11:52:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OksynNP9En 00:43:53.485 11:52:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OksynNP9En 00:43:53.745 11:52:18 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:43:53.745 11:52:18 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:43:53.745 11:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:53.745 11:52:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:53.745 11:52:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:54.004 11:52:18 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.lorm47WBwv == \/\t\m\p\/\t\m\p\.\l\o\r\m\4\7\W\B\w\v ]] 00:43:54.004 11:52:18 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:43:54.004 11:52:18 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:54.004 11:52:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:54.004 11:52:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:54.004 11:52:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:54.264 11:52:19 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.OksynNP9En == \/\t\m\p\/\t\m\p\.\O\k\s\y\n\N\P\9\E\n ]] 00:43:54.264 11:52:19 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:43:54.264 11:52:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:54.264 11:52:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:54.264 11:52:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:54.264 11:52:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:54.264 11:52:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:54.526 11:52:19 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:43:54.526 11:52:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:43:54.526 11:52:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:54.526 11:52:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:54.526 11:52:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:54.526 11:52:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:54.526 11:52:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:54.785 11:52:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:54.785 11:52:19 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:54.785 11:52:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:55.045 [2024-06-10 11:52:19.890756] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:55.045 nvme0n1 00:43:55.045 11:52:19 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:43:55.045 11:52:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:55.045 11:52:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:55.045 11:52:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:55.045 11:52:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:55.045 11:52:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:55.304 11:52:20 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:43:55.304 11:52:20 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:43:55.304 11:52:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:55.304 11:52:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:55.304 11:52:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:55.304 
11:52:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:55.304 11:52:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:55.563 11:52:20 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:43:55.563 11:52:20 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:55.563 Running I/O for 1 seconds... 00:43:56.501 00:43:56.502 Latency(us) 00:43:56.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:56.502 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:56.502 nvme0n1 : 1.01 9636.69 37.64 0.00 0.00 13223.45 6658.46 22020.10 00:43:56.502 =================================================================================================================== 00:43:56.502 Total : 9636.69 37.64 0.00 0.00 13223.45 6658.46 22020.10 00:43:56.502 0 00:43:56.502 11:52:21 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:56.502 11:52:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:56.761 11:52:21 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:43:56.761 11:52:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:56.761 11:52:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:56.761 11:52:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:56.761 11:52:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:56.761 11:52:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:57.020 11:52:22 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:43:57.020 11:52:22 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:43:57.020 11:52:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:57.021 11:52:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:57.021 11:52:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:57.021 11:52:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:57.021 11:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:57.279 11:52:22 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:57.280 11:52:22 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:57.280 11:52:22 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:43:57.280 11:52:22 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:57.280 11:52:22 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:43:57.280 11:52:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:57.280 11:52:22 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:43:57.280 11:52:22 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:57.280 11:52:22 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:57.280 11:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:57.539 [2024-06-10 11:52:22.487527] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:57.539 [2024-06-10 11:52:22.488176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1821fc0 (107): Transport endpoint is not connected 00:43:57.539 [2024-06-10 11:52:22.489169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1821fc0 (9): Bad file descriptor 00:43:57.539 [2024-06-10 11:52:22.490169] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:43:57.539 [2024-06-10 11:52:22.490185] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:57.539 [2024-06-10 11:52:22.490198] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:43:57.539 request: 00:43:57.539 { 00:43:57.539 "name": "nvme0", 00:43:57.539 "trtype": "tcp", 00:43:57.539 "traddr": "127.0.0.1", 00:43:57.539 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:57.539 "adrfam": "ipv4", 00:43:57.539 "trsvcid": "4420", 00:43:57.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:57.539 "psk": "key1", 00:43:57.539 "method": "bdev_nvme_attach_controller", 00:43:57.539 "req_id": 1 00:43:57.539 } 00:43:57.539 Got JSON-RPC error response 00:43:57.539 response: 00:43:57.539 { 00:43:57.539 "code": -5, 00:43:57.539 "message": "Input/output error" 00:43:57.539 } 00:43:57.539 11:52:22 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:43:57.539 11:52:22 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:57.539 11:52:22 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:57.539 11:52:22 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:57.539 11:52:22 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:43:57.539 11:52:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:57.539 11:52:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:57.539 11:52:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:57.539 11:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:57.539 11:52:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:57.897 11:52:22 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:43:57.897 11:52:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:43:57.897 11:52:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:57.897 11:52:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:57.897 11:52:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:57.898 11:52:22 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:57.898 11:52:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:57.898 11:52:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:57.898 11:52:22 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:43:57.898 11:52:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:58.157 11:52:23 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:43:58.157 11:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:58.416 11:52:23 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:43:58.416 11:52:23 keyring_file -- keyring/file.sh@77 -- # jq length 00:43:58.416 11:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:58.676 11:52:23 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:43:58.676 11:52:23 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.lorm47WBwv 00:43:58.676 11:52:23 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lorm47WBwv 00:43:58.676 11:52:23 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:43:58.676 11:52:23 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lorm47WBwv 00:43:58.676 11:52:23 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:43:58.676 11:52:23 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:58.676 11:52:23 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:43:58.676 11:52:23 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:58.676 11:52:23 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lorm47WBwv 00:43:58.676 11:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lorm47WBwv 00:43:58.935 [2024-06-10 11:52:23.836861] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lorm47WBwv': 0100660 00:43:58.935 [2024-06-10 11:52:23.836893] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:58.935 request: 00:43:58.935 { 00:43:58.935 "name": "key0", 00:43:58.935 "path": "/tmp/tmp.lorm47WBwv", 00:43:58.935 "method": "keyring_file_add_key", 00:43:58.935 "req_id": 1 00:43:58.935 } 00:43:58.935 Got JSON-RPC error response 00:43:58.935 response: 00:43:58.935 { 00:43:58.935 "code": -1, 00:43:58.935 "message": "Operation not permitted" 00:43:58.935 } 00:43:58.936 11:52:23 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:43:58.936 11:52:23 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:58.936 11:52:23 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:58.936 11:52:23 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:58.936 11:52:23 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.lorm47WBwv 00:43:58.936 11:52:23 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.lorm47WBwv 00:43:58.936 11:52:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lorm47WBwv 00:43:58.936 11:52:24 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.lorm47WBwv 00:43:59.195 11:52:24 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:43:59.195 11:52:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:59.195 11:52:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:59.195 11:52:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:59.195 11:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:59.195 11:52:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:59.195 11:52:24 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:43:59.195 11:52:24 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:59.195 11:52:24 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:43:59.195 11:52:24 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:59.195 11:52:24 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:43:59.195 11:52:24 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:59.195 11:52:24 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:43:59.195 11:52:24 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:43:59.195 11:52:24 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:59.195 11:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:59.454 [2024-06-10 11:52:24.490599] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lorm47WBwv': No such file or directory 00:43:59.455 [2024-06-10 11:52:24.490629] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:59.455 [2024-06-10 11:52:24.490658] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:59.455 [2024-06-10 11:52:24.490669] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:59.455 [2024-06-10 11:52:24.490679] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:59.455 request: 00:43:59.455 { 00:43:59.455 "name": "nvme0", 00:43:59.455 "trtype": "tcp", 00:43:59.455 "traddr": "127.0.0.1", 00:43:59.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:59.455 "adrfam": "ipv4", 00:43:59.455 "trsvcid": "4420", 00:43:59.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:59.455 "psk": "key0", 00:43:59.455 "method": "bdev_nvme_attach_controller", 
00:43:59.455 "req_id": 1 00:43:59.455 } 00:43:59.455 Got JSON-RPC error response 00:43:59.455 response: 00:43:59.455 { 00:43:59.455 "code": -19, 00:43:59.455 "message": "No such device" 00:43:59.455 } 00:43:59.455 11:52:24 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:43:59.455 11:52:24 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:43:59.455 11:52:24 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:43:59.455 11:52:24 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:43:59.455 11:52:24 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:43:59.455 11:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:59.714 11:52:24 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TT1G2x7wnN 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:59.714 11:52:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:59.714 11:52:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:43:59.714 11:52:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:43:59.714 11:52:24 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:43:59.714 11:52:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:43:59.714 11:52:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TT1G2x7wnN 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TT1G2x7wnN 00:43:59.714 11:52:24 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.TT1G2x7wnN 00:43:59.714 11:52:24 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TT1G2x7wnN 00:43:59.714 11:52:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TT1G2x7wnN 00:43:59.973 11:52:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:59.973 11:52:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:00.232 nvme0n1 00:44:00.232 11:52:25 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:44:00.232 11:52:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:00.232 11:52:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:00.232 11:52:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:00.232 11:52:25 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:00.232 11:52:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:00.492 11:52:25 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:44:00.492 11:52:25 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:44:00.492 11:52:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:00.751 11:52:25 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:44:00.751 11:52:25 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:44:00.751 11:52:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:00.751 11:52:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:00.751 11:52:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.011 11:52:26 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:44:01.011 11:52:26 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:44:01.011 11:52:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:01.011 11:52:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:01.011 11:52:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:01.011 11:52:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:01.011 11:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.270 11:52:26 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:44:01.270 11:52:26 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:01.270 11:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:01.529 11:52:26 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:44:01.529 11:52:26 keyring_file -- keyring/file.sh@104 -- # jq length 00:44:01.529 11:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:01.789 11:52:26 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:44:01.789 11:52:26 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TT1G2x7wnN 00:44:01.789 11:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TT1G2x7wnN 00:44:02.049 11:52:26 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.OksynNP9En 00:44:02.049 11:52:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.OksynNP9En 00:44:02.049 11:52:27 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.049 11:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:02.308 nvme0n1 00:44:02.308 11:52:27 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:44:02.308 11:52:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:02.877 11:52:27 keyring_file -- keyring/file.sh@112 -- # config='{ 00:44:02.877 "subsystems": [ 00:44:02.877 { 00:44:02.877 "subsystem": "keyring", 00:44:02.877 "config": [ 00:44:02.877 { 00:44:02.877 "method": "keyring_file_add_key", 00:44:02.877 "params": { 00:44:02.877 "name": "key0", 00:44:02.877 "path": "/tmp/tmp.TT1G2x7wnN" 00:44:02.877 } 00:44:02.877 }, 00:44:02.877 { 00:44:02.877 "method": "keyring_file_add_key", 00:44:02.877 "params": { 00:44:02.877 "name": "key1", 00:44:02.877 "path": "/tmp/tmp.OksynNP9En" 00:44:02.877 } 00:44:02.877 } 00:44:02.877 ] 00:44:02.877 }, 00:44:02.877 { 00:44:02.877 "subsystem": "iobuf", 00:44:02.877 "config": [ 00:44:02.877 { 00:44:02.877 "method": "iobuf_set_options", 00:44:02.877 "params": { 00:44:02.877 "small_pool_count": 8192, 00:44:02.877 "large_pool_count": 1024, 00:44:02.877 "small_bufsize": 8192, 00:44:02.877 "large_bufsize": 135168 00:44:02.877 } 00:44:02.877 } 00:44:02.877 ] 00:44:02.877 }, 00:44:02.877 { 00:44:02.877 "subsystem": "sock", 00:44:02.877 "config": [ 00:44:02.877 { 00:44:02.877 "method": "sock_set_default_impl", 00:44:02.877 "params": { 00:44:02.877 "impl_name": "posix" 00:44:02.877 } 00:44:02.877 }, 00:44:02.877 { 00:44:02.877 "method": "sock_impl_set_options", 00:44:02.877 "params": { 00:44:02.877 "impl_name": "ssl", 00:44:02.877 "recv_buf_size": 4096, 00:44:02.877 "send_buf_size": 4096, 00:44:02.877 "enable_recv_pipe": true, 00:44:02.877 "enable_quickack": false, 00:44:02.877 "enable_placement_id": 0, 00:44:02.877 "enable_zerocopy_send_server": true, 00:44:02.877 "enable_zerocopy_send_client": false, 00:44:02.877 "zerocopy_threshold": 0, 00:44:02.877 "tls_version": 0, 00:44:02.877 "enable_ktls": false 00:44:02.877 } 00:44:02.877 }, 00:44:02.877 { 00:44:02.877 "method": "sock_impl_set_options", 00:44:02.877 "params": { 00:44:02.877 "impl_name": "posix", 00:44:02.877 "recv_buf_size": 2097152, 00:44:02.877 "send_buf_size": 2097152, 00:44:02.877 "enable_recv_pipe": true, 00:44:02.877 "enable_quickack": false, 00:44:02.878 "enable_placement_id": 0, 00:44:02.878 "enable_zerocopy_send_server": true, 00:44:02.878 "enable_zerocopy_send_client": false, 00:44:02.878 "zerocopy_threshold": 0, 00:44:02.878 "tls_version": 0, 00:44:02.878 "enable_ktls": false 00:44:02.878 } 00:44:02.878 } 00:44:02.878 ] 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "subsystem": "vmd", 00:44:02.878 "config": [] 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "subsystem": "accel", 00:44:02.878 "config": [ 00:44:02.878 { 00:44:02.878 "method": "accel_set_options", 00:44:02.878 "params": { 00:44:02.878 "small_cache_size": 128, 00:44:02.878 "large_cache_size": 16, 00:44:02.878 "task_count": 2048, 00:44:02.878 "sequence_count": 2048, 00:44:02.878 "buf_count": 2048 00:44:02.878 } 00:44:02.878 } 00:44:02.878 ] 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "subsystem": "bdev", 00:44:02.878 "config": [ 00:44:02.878 { 00:44:02.878 "method": "bdev_set_options", 00:44:02.878 "params": { 00:44:02.878 "bdev_io_pool_size": 65535, 00:44:02.878 "bdev_io_cache_size": 256, 00:44:02.878 "bdev_auto_examine": true, 00:44:02.878 "iobuf_small_cache_size": 128, 
00:44:02.878 "iobuf_large_cache_size": 16 00:44:02.878 } 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "method": "bdev_raid_set_options", 00:44:02.878 "params": { 00:44:02.878 "process_window_size_kb": 1024 00:44:02.878 } 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "method": "bdev_iscsi_set_options", 00:44:02.878 "params": { 00:44:02.878 "timeout_sec": 30 00:44:02.878 } 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "method": "bdev_nvme_set_options", 00:44:02.878 "params": { 00:44:02.878 "action_on_timeout": "none", 00:44:02.878 "timeout_us": 0, 00:44:02.878 "timeout_admin_us": 0, 00:44:02.878 "keep_alive_timeout_ms": 10000, 00:44:02.878 "arbitration_burst": 0, 00:44:02.878 "low_priority_weight": 0, 00:44:02.878 "medium_priority_weight": 0, 00:44:02.878 "high_priority_weight": 0, 00:44:02.878 "nvme_adminq_poll_period_us": 10000, 00:44:02.878 "nvme_ioq_poll_period_us": 0, 00:44:02.878 "io_queue_requests": 512, 00:44:02.878 "delay_cmd_submit": true, 00:44:02.878 "transport_retry_count": 4, 00:44:02.878 "bdev_retry_count": 3, 00:44:02.878 "transport_ack_timeout": 0, 00:44:02.878 "ctrlr_loss_timeout_sec": 0, 00:44:02.878 "reconnect_delay_sec": 0, 00:44:02.878 "fast_io_fail_timeout_sec": 0, 00:44:02.878 "disable_auto_failback": false, 00:44:02.878 "generate_uuids": false, 00:44:02.878 "transport_tos": 0, 00:44:02.878 "nvme_error_stat": false, 00:44:02.878 "rdma_srq_size": 0, 00:44:02.878 "io_path_stat": false, 00:44:02.878 "allow_accel_sequence": false, 00:44:02.878 "rdma_max_cq_size": 0, 00:44:02.878 "rdma_cm_event_timeout_ms": 0, 00:44:02.878 "dhchap_digests": [ 00:44:02.878 "sha256", 00:44:02.878 "sha384", 00:44:02.878 "sha512" 00:44:02.878 ], 00:44:02.878 "dhchap_dhgroups": [ 00:44:02.878 "null", 00:44:02.878 "ffdhe2048", 00:44:02.878 "ffdhe3072", 00:44:02.878 "ffdhe4096", 00:44:02.878 "ffdhe6144", 00:44:02.878 "ffdhe8192" 00:44:02.878 ] 00:44:02.878 } 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "method": "bdev_nvme_attach_controller", 00:44:02.878 "params": { 00:44:02.878 "name": "nvme0", 00:44:02.878 "trtype": "TCP", 00:44:02.878 "adrfam": "IPv4", 00:44:02.878 "traddr": "127.0.0.1", 00:44:02.878 "trsvcid": "4420", 00:44:02.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:02.878 "prchk_reftag": false, 00:44:02.878 "prchk_guard": false, 00:44:02.878 "ctrlr_loss_timeout_sec": 0, 00:44:02.878 "reconnect_delay_sec": 0, 00:44:02.878 "fast_io_fail_timeout_sec": 0, 00:44:02.878 "psk": "key0", 00:44:02.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:02.878 "hdgst": false, 00:44:02.878 "ddgst": false 00:44:02.878 } 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "method": "bdev_nvme_set_hotplug", 00:44:02.878 "params": { 00:44:02.878 "period_us": 100000, 00:44:02.878 "enable": false 00:44:02.878 } 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "method": "bdev_wait_for_examine" 00:44:02.878 } 00:44:02.878 ] 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "subsystem": "nbd", 00:44:02.878 "config": [] 00:44:02.878 } 00:44:02.878 ] 00:44:02.878 }' 00:44:02.878 11:52:27 keyring_file -- keyring/file.sh@114 -- # killprocess 30463 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 30463 ']' 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@953 -- # kill -0 30463 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@954 -- # uname 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 30463 00:44:02.878 11:52:27 keyring_file -- 
common/autotest_common.sh@955 -- # process_name=reactor_1 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 30463' 00:44:02.878 killing process with pid 30463 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@968 -- # kill 30463 00:44:02.878 Received shutdown signal, test time was about 1.000000 seconds 00:44:02.878 00:44:02.878 Latency(us) 00:44:02.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:02.878 =================================================================================================================== 00:44:02.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@973 -- # wait 30463 00:44:02.878 11:52:27 keyring_file -- keyring/file.sh@117 -- # bperfpid=32207 00:44:02.878 11:52:27 keyring_file -- keyring/file.sh@119 -- # waitforlisten 32207 /var/tmp/bperf.sock 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 32207 ']' 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:44:02.878 11:52:27 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:02.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
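(Editor's sketch, not part of the captured output.) The relaunch being traced here reduces to roughly the shell below: the JSON printed just after this point is the save_config output captured from the first bdevperf instance, handed to the fresh bdevperf over a process-substitution file descriptor — the /dev/fd/63 in the command line above. Paths and option values are copied from this run; the variable name and the <(echo ...) form are an inference about how that fd is produced.

# sketch: replay the saved keyring/bdev config into a new bdevperf instance
config=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
    -c <(echo "$config")   # the <(...) substitution is what appears as /dev/fd/63 in the trace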
00:44:02.878 11:52:27 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:44:02.878 11:52:27 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:44:02.878 "subsystems": [ 00:44:02.878 { 00:44:02.878 "subsystem": "keyring", 00:44:02.878 "config": [ 00:44:02.878 { 00:44:02.878 "method": "keyring_file_add_key", 00:44:02.878 "params": { 00:44:02.878 "name": "key0", 00:44:02.878 "path": "/tmp/tmp.TT1G2x7wnN" 00:44:02.878 } 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "method": "keyring_file_add_key", 00:44:02.878 "params": { 00:44:02.878 "name": "key1", 00:44:02.878 "path": "/tmp/tmp.OksynNP9En" 00:44:02.878 } 00:44:02.878 } 00:44:02.878 ] 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "subsystem": "iobuf", 00:44:02.878 "config": [ 00:44:02.878 { 00:44:02.878 "method": "iobuf_set_options", 00:44:02.878 "params": { 00:44:02.878 "small_pool_count": 8192, 00:44:02.878 "large_pool_count": 1024, 00:44:02.878 "small_bufsize": 8192, 00:44:02.878 "large_bufsize": 135168 00:44:02.878 } 00:44:02.878 } 00:44:02.878 ] 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "subsystem": "sock", 00:44:02.878 "config": [ 00:44:02.878 { 00:44:02.878 "method": "sock_set_default_impl", 00:44:02.878 "params": { 00:44:02.878 "impl_name": "posix" 00:44:02.878 } 00:44:02.878 }, 00:44:02.878 { 00:44:02.878 "method": "sock_impl_set_options", 00:44:02.878 "params": { 00:44:02.878 "impl_name": "ssl", 00:44:02.878 "recv_buf_size": 4096, 00:44:02.878 "send_buf_size": 4096, 00:44:02.878 "enable_recv_pipe": true, 00:44:02.878 "enable_quickack": false, 00:44:02.878 "enable_placement_id": 0, 00:44:02.878 "enable_zerocopy_send_server": true, 00:44:02.878 "enable_zerocopy_send_client": false, 00:44:02.878 "zerocopy_threshold": 0, 00:44:02.878 "tls_version": 0, 00:44:02.878 "enable_ktls": false 00:44:02.878 } 00:44:02.878 }, 00:44:02.879 { 00:44:02.879 "method": "sock_impl_set_options", 00:44:02.879 "params": { 00:44:02.879 "impl_name": "posix", 00:44:02.879 "recv_buf_size": 2097152, 00:44:02.879 "send_buf_size": 2097152, 00:44:02.879 "enable_recv_pipe": true, 00:44:02.879 "enable_quickack": false, 00:44:02.879 "enable_placement_id": 0, 00:44:02.879 "enable_zerocopy_send_server": true, 00:44:02.879 "enable_zerocopy_send_client": false, 00:44:02.879 "zerocopy_threshold": 0, 00:44:02.879 "tls_version": 0, 00:44:02.879 "enable_ktls": false 00:44:02.879 } 00:44:02.879 } 00:44:02.879 ] 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "subsystem": "vmd", 00:44:02.879 "config": [] 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "subsystem": "accel", 00:44:02.879 "config": [ 00:44:02.879 { 00:44:02.879 "method": "accel_set_options", 00:44:02.879 "params": { 00:44:02.879 "small_cache_size": 128, 00:44:02.879 "large_cache_size": 16, 00:44:02.879 "task_count": 2048, 00:44:02.879 "sequence_count": 2048, 00:44:02.879 "buf_count": 2048 00:44:02.879 } 00:44:02.879 } 00:44:02.879 ] 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "subsystem": "bdev", 00:44:02.879 "config": [ 00:44:02.879 { 00:44:02.879 "method": "bdev_set_options", 00:44:02.879 "params": { 00:44:02.879 "bdev_io_pool_size": 65535, 00:44:02.879 "bdev_io_cache_size": 256, 00:44:02.879 "bdev_auto_examine": true, 00:44:02.879 "iobuf_small_cache_size": 128, 00:44:02.879 "iobuf_large_cache_size": 16 00:44:02.879 } 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "method": "bdev_raid_set_options", 00:44:02.879 "params": { 00:44:02.879 "process_window_size_kb": 1024 00:44:02.879 } 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "method": "bdev_iscsi_set_options", 00:44:02.879 "params": { 00:44:02.879 
"timeout_sec": 30 00:44:02.879 } 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "method": "bdev_nvme_set_options", 00:44:02.879 "params": { 00:44:02.879 "action_on_timeout": "none", 00:44:02.879 "timeout_us": 0, 00:44:02.879 "timeout_admin_us": 0, 00:44:02.879 "keep_alive_timeout_ms": 10000, 00:44:02.879 "arbitration_burst": 0, 00:44:02.879 "low_priority_weight": 0, 00:44:02.879 "medium_priority_weight": 0, 00:44:02.879 "high_priority_weight": 0, 00:44:02.879 "nvme_adminq_poll_period_us": 10000, 00:44:02.879 "nvme_ioq_poll_period_us": 0, 00:44:02.879 "io_queue_requests": 512, 00:44:02.879 "delay_cmd_submit": true, 00:44:02.879 "transport_retry_count": 4, 00:44:02.879 "bdev_retry_count": 3, 00:44:02.879 "transport_ack_timeout": 0, 00:44:02.879 "ctrlr_loss_timeout_sec": 0, 00:44:02.879 "reconnect_delay_sec": 0, 00:44:02.879 "fast_io_fail_timeout_sec": 0, 00:44:02.879 "disable_auto_failback": false, 00:44:02.879 "generate_uuids": false, 00:44:02.879 "transport_tos": 0, 00:44:02.879 "nvme_error_stat": false, 00:44:02.879 "rdma_srq_size": 0, 00:44:02.879 "io_path_stat": false, 00:44:02.879 "allow_accel_sequence": false, 00:44:02.879 "rdma_max_cq_size": 0, 00:44:02.879 "rdma_cm_event_timeout_ms": 0, 00:44:02.879 "dhchap_digests": [ 00:44:02.879 "sha256", 00:44:02.879 "sha384", 00:44:02.879 "sha512" 00:44:02.879 ], 00:44:02.879 "dhchap_dhgroups": [ 00:44:02.879 "null", 00:44:02.879 "ffdhe2048", 00:44:02.879 "ffdhe3072", 00:44:02.879 "ffdhe4096", 00:44:02.879 "ffdhe6144", 00:44:02.879 "ffdhe8192" 00:44:02.879 ] 00:44:02.879 } 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "method": "bdev_nvme_attach_controller", 00:44:02.879 "params": { 00:44:02.879 "name": "nvme0", 00:44:02.879 "trtype": "TCP", 00:44:02.879 "adrfam": "IPv4", 00:44:02.879 "traddr": "127.0.0.1", 00:44:02.879 "trsvcid": "4420", 00:44:02.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:02.879 "prchk_reftag": false, 00:44:02.879 "prchk_guard": false, 00:44:02.879 "ctrlr_loss_timeout_sec": 0, 00:44:02.879 "reconnect_delay_sec": 0, 00:44:02.879 "fast_io_fail_timeout_sec": 0, 00:44:02.879 "psk": "key0", 00:44:02.879 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:02.879 "hdgst": false, 00:44:02.879 "ddgst": false 00:44:02.879 } 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "method": "bdev_nvme_set_hotplug", 00:44:02.879 "params": { 00:44:02.879 "period_us": 100000, 00:44:02.879 "enable": false 00:44:02.879 } 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "method": "bdev_wait_for_examine" 00:44:02.879 } 00:44:02.879 ] 00:44:02.879 }, 00:44:02.879 { 00:44:02.879 "subsystem": "nbd", 00:44:02.879 "config": [] 00:44:02.879 } 00:44:02.879 ] 00:44:02.879 }' 00:44:02.879 11:52:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:03.138 [2024-06-10 11:52:27.981618] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:44:03.138 [2024-06-10 11:52:27.981684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32207 ] 00:44:03.138 EAL: No free 2048 kB hugepages reported on node 1 00:44:03.138 [2024-06-10 11:52:28.092608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:03.138 [2024-06-10 11:52:28.175871] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:44:03.398 [2024-06-10 11:52:28.339655] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:03.967 11:52:28 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:44:03.967 11:52:28 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:44:03.967 11:52:28 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:44:03.967 11:52:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:03.967 11:52:28 keyring_file -- keyring/file.sh@120 -- # jq length 00:44:03.967 11:52:29 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:44:03.967 11:52:29 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:44:03.967 11:52:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:03.967 11:52:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:03.967 11:52:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:03.967 11:52:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:03.967 11:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.226 11:52:29 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:04.226 11:52:29 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:44:04.226 11:52:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:04.226 11:52:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:04.226 11:52:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:04.226 11:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:04.226 11:52:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:04.485 11:52:29 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:44:04.485 11:52:29 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:44:04.485 11:52:29 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:44:04.485 11:52:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:04.745 11:52:29 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:44:04.745 11:52:29 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:04.745 11:52:29 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.TT1G2x7wnN /tmp/tmp.OksynNP9En 00:44:04.745 11:52:29 keyring_file -- keyring/file.sh@20 -- # killprocess 32207 00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 32207 ']' 00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@953 -- # kill -0 32207 00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@954 -- # uname 
00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 32207 00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 32207' 00:44:04.745 killing process with pid 32207 00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@968 -- # kill 32207 00:44:04.745 Received shutdown signal, test time was about 1.000000 seconds 00:44:04.745 00:44:04.745 Latency(us) 00:44:04.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:04.745 =================================================================================================================== 00:44:04.745 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:04.745 11:52:29 keyring_file -- common/autotest_common.sh@973 -- # wait 32207 00:44:05.004 11:52:30 keyring_file -- keyring/file.sh@21 -- # killprocess 30198 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 30198 ']' 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@953 -- # kill -0 30198 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@954 -- # uname 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 30198 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 30198' 00:44:05.004 killing process with pid 30198 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@968 -- # kill 30198 00:44:05.004 [2024-06-10 11:52:30.068374] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:44:05.004 11:52:30 keyring_file -- common/autotest_common.sh@973 -- # wait 30198 00:44:05.572 00:44:05.572 real 0m14.346s 00:44:05.572 user 0m33.925s 00:44:05.573 sys 0m3.996s 00:44:05.573 11:52:30 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:05.573 11:52:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:05.573 ************************************ 00:44:05.573 END TEST keyring_file 00:44:05.573 ************************************ 00:44:05.573 11:52:30 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:44:05.573 11:52:30 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:05.573 11:52:30 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:44:05.573 11:52:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:44:05.573 11:52:30 -- common/autotest_common.sh@10 -- # set +x 00:44:05.573 ************************************ 00:44:05.573 START TEST keyring_linux 00:44:05.573 ************************************ 00:44:05.573 11:52:30 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:05.573 * Looking for test storage... 
00:44:05.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:05.573 11:52:30 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:05.573 11:52:30 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:05.573 11:52:30 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:05.573 11:52:30 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:05.573 11:52:30 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.573 11:52:30 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.573 11:52:30 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.573 11:52:30 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:05.573 11:52:30 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:05.573 11:52:30 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:05.573 11:52:30 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:05.573 11:52:30 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:05.573 11:52:30 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:05.573 11:52:30 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:05.573 11:52:30 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@705 -- # python - 00:44:05.573 11:52:30 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:05.573 /tmp/:spdk-test:key0 00:44:05.573 11:52:30 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:05.573 11:52:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:44:05.573 11:52:30 keyring_linux -- nvmf/common.sh@705 -- # python - 00:44:05.833 11:52:30 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:05.833 11:52:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:05.833 /tmp/:spdk-test:key1 00:44:05.833 11:52:30 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=32826 00:44:05.833 11:52:30 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:05.833 11:52:30 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 32826 00:44:05.833 11:52:30 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 32826 ']' 00:44:05.833 11:52:30 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:05.833 11:52:30 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:44:05.833 11:52:30 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:05.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:05.833 11:52:30 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:44:05.833 11:52:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:05.833 [2024-06-10 11:52:30.784756] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:44:05.833 [2024-06-10 11:52:30.784826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32826 ] 00:44:05.833 EAL: No free 2048 kB hugepages reported on node 1 00:44:05.833 [2024-06-10 11:52:30.903377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:06.093 [2024-06-10 11:52:30.988613] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:44:06.661 11:52:31 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:06.661 [2024-06-10 11:52:31.682767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:06.661 null0 00:44:06.661 [2024-06-10 11:52:31.714807] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:06.661 [2024-06-10 11:52:31.715245] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:06.661 11:52:31 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:06.661 760836948 00:44:06.661 11:52:31 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:06.661 411291904 00:44:06.661 11:52:31 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=32999 00:44:06.661 11:52:31 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 32999 /var/tmp/bperf.sock 00:44:06.661 11:52:31 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 32999 ']' 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:06.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:44:06.661 11:52:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:06.921 [2024-06-10 11:52:31.792934] Starting SPDK v24.09-pre git sha1 1e8a0c991 / DPDK 24.03.0 initialization... 
00:44:06.921 [2024-06-10 11:52:31.792994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32999 ] 00:44:06.921 EAL: No free 2048 kB hugepages reported on node 1 00:44:06.921 [2024-06-10 11:52:31.903067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:06.921 [2024-06-10 11:52:31.990388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:44:07.858 11:52:32 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:44:07.858 11:52:32 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:44:07.858 11:52:32 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:07.858 11:52:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:07.858 11:52:32 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:07.858 11:52:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:08.427 11:52:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:08.427 11:52:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:08.427 [2024-06-10 11:52:33.443386] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:08.427 nvme0n1 00:44:08.427 11:52:33 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:44:08.427 11:52:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:08.427 11:52:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:08.686 11:52:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:08.686 11:52:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:08.686 11:52:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:08.686 11:52:33 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:08.686 11:52:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:08.686 11:52:33 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:08.686 11:52:33 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:08.686 11:52:33 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:08.686 11:52:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:08.686 11:52:33 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:08.946 11:52:33 keyring_linux -- keyring/linux.sh@25 -- # sn=760836948 00:44:08.946 11:52:33 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:08.946 11:52:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:08.946 
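bdevperf is then driven entirely over its RPC socket: the Linux-keyring provider is enabled before framework init (bdevperf was started with --wait-for-rpc so that pre-init RPCs like this can be issued first), the controller is attached with --psk naming the kernel key rather than a key file, and check_keys cross-checks SPDK's view of the key against the keyring itself. A condensed sketch of that RPC sequence, assuming rpc.py (spdk/scripts/rpc.py in the workspace above) is on PATH and using the same socket path; the commands are the ones traced, with the two jq steps folded into one filter.

  sock=/var/tmp/bperf.sock

  rpc.py -s "$sock" keyring_linux_set_options --enable
  rpc.py -s "$sock" framework_start_init
  rpc.py -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # check_keys: SPDK should report exactly one key whose serial matches the kernel's.
  rpc.py -s "$sock" keyring_get_keys | jq length
  rpc.py -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'
  keyctl search @s user :spdk-test:key0

Once the bdevperf run finishes, the same socket is used to detach nvme0 and to confirm that keyring_get_keys drops back to zero entries, which is what the check_keys 0 pass below verifies.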
11:52:33 keyring_linux -- keyring/linux.sh@26 -- # [[ 760836948 == \7\6\0\8\3\6\9\4\8 ]] 00:44:08.946 11:52:33 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 760836948 00:44:08.946 11:52:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:08.946 11:52:34 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:09.205 Running I/O for 1 seconds... 00:44:10.142 00:44:10.142 Latency(us) 00:44:10.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:10.142 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:10.142 nvme0n1 : 1.01 9949.82 38.87 0.00 0.00 12793.71 3486.52 16882.07 00:44:10.142 =================================================================================================================== 00:44:10.142 Total : 9949.82 38.87 0.00 0.00 12793.71 3486.52 16882.07 00:44:10.142 0 00:44:10.142 11:52:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:10.142 11:52:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:10.401 11:52:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:10.401 11:52:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:10.401 11:52:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:10.401 11:52:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:10.401 11:52:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:10.401 11:52:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.660 11:52:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:10.660 11:52:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:10.660 11:52:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:10.660 11:52:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:10.660 11:52:35 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:44:10.660 11:52:35 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:10.660 11:52:35 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:44:10.660 11:52:35 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:10.660 11:52:35 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:44:10.660 11:52:35 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:44:10.660 11:52:35 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:10.660 11:52:35 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:10.660 [2024-06-10 11:52:35.732735] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:10.660 [2024-06-10 11:52:35.733406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76f50 (107): Transport endpoint is not connected 00:44:10.660 [2024-06-10 11:52:35.734400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76f50 (9): Bad file descriptor 00:44:10.660 [2024-06-10 11:52:35.735400] nvme_ctrlr.c:4095:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:44:10.660 [2024-06-10 11:52:35.735415] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:10.661 [2024-06-10 11:52:35.735427] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:44:10.661 request: 00:44:10.661 { 00:44:10.661 "name": "nvme0", 00:44:10.661 "trtype": "tcp", 00:44:10.661 "traddr": "127.0.0.1", 00:44:10.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:10.661 "adrfam": "ipv4", 00:44:10.661 "trsvcid": "4420", 00:44:10.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:10.661 "psk": ":spdk-test:key1", 00:44:10.661 "method": "bdev_nvme_attach_controller", 00:44:10.661 "req_id": 1 00:44:10.661 } 00:44:10.661 Got JSON-RPC error response 00:44:10.661 response: 00:44:10.661 { 00:44:10.661 "code": -5, 00:44:10.661 "message": "Input/output error" 00:44:10.661 } 00:44:10.661 11:52:35 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:44:10.661 11:52:35 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:44:10.661 11:52:35 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:44:10.661 11:52:35 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@33 -- # sn=760836948 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 760836948 00:44:10.661 1 links removed 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:10.661 11:52:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:10.920 11:52:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:10.920 11:52:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:10.920 11:52:35 keyring_linux -- keyring/linux.sh@33 -- # sn=411291904 00:44:10.920 11:52:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 411291904 00:44:10.920 1 links removed 00:44:10.920 11:52:35 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 32999 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 32999 ']' 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 32999 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 32999 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 32999' 00:44:10.920 killing process with pid 32999 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@968 -- # kill 32999 00:44:10.920 Received shutdown signal, test time was about 1.000000 seconds 00:44:10.920 00:44:10.920 Latency(us) 00:44:10.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:10.920 =================================================================================================================== 00:44:10.920 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:10.920 11:52:35 keyring_linux -- common/autotest_common.sh@973 -- # wait 32999 00:44:10.920 11:52:36 keyring_linux -- keyring/linux.sh@42 -- # killprocess 32826 00:44:10.920 11:52:36 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 32826 ']' 00:44:10.920 11:52:36 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 32826 00:44:10.920 11:52:36 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:44:11.180 11:52:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:44:11.180 11:52:36 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 32826 00:44:11.180 11:52:36 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:44:11.180 11:52:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:44:11.180 11:52:36 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 32826' 00:44:11.180 killing process with pid 32826 00:44:11.180 11:52:36 keyring_linux -- common/autotest_common.sh@968 -- # kill 32826 00:44:11.180 11:52:36 keyring_linux -- common/autotest_common.sh@973 -- # wait 32826 00:44:11.439 00:44:11.439 real 0m5.907s 00:44:11.439 user 0m10.597s 00:44:11.439 sys 0m1.881s 00:44:11.439 11:52:36 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:44:11.439 11:52:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:11.439 ************************************ 00:44:11.439 END TEST keyring_linux 00:44:11.439 ************************************ 00:44:11.439 11:52:36 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- 
spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:44:11.439 11:52:36 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:44:11.439 11:52:36 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:44:11.439 11:52:36 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:44:11.439 11:52:36 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:44:11.439 11:52:36 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:44:11.439 11:52:36 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:44:11.439 11:52:36 -- common/autotest_common.sh@723 -- # xtrace_disable 00:44:11.439 11:52:36 -- common/autotest_common.sh@10 -- # set +x 00:44:11.439 11:52:36 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:44:11.439 11:52:36 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:44:11.439 11:52:36 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:44:11.439 11:52:36 -- common/autotest_common.sh@10 -- # set +x 00:44:18.009 INFO: APP EXITING 00:44:18.009 INFO: killing all VMs 00:44:18.009 INFO: killing vhost app 00:44:18.009 WARN: no vhost pid file found 00:44:18.009 INFO: EXIT DONE 00:44:22.202 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:22.202 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:44:26.395 Cleaning 00:44:26.395 Removing: /var/run/dpdk/spdk0/config 00:44:26.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:26.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:26.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:26.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:26.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:26.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:26.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:26.395 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:26.395 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:26.395 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:26.395 Removing: /var/run/dpdk/spdk1/config 00:44:26.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:26.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:26.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:26.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:26.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:26.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:26.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 
00:44:26.395 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:26.395 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:26.395 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:26.395 Removing: /var/run/dpdk/spdk1/mp_socket 00:44:26.395 Removing: /var/run/dpdk/spdk2/config 00:44:26.395 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:26.396 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:26.396 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:26.396 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:26.396 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:26.396 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:26.396 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:26.396 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:26.396 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:26.396 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:26.396 Removing: /var/run/dpdk/spdk3/config 00:44:26.396 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:26.396 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:26.396 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:26.396 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:26.396 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:26.396 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:26.396 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:26.396 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:26.396 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:26.396 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:26.396 Removing: /var/run/dpdk/spdk4/config 00:44:26.396 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:26.396 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:26.396 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:26.396 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:26.396 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:26.396 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:26.396 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:26.396 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:26.396 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:26.396 Removing: /var/run/dpdk/spdk4/hugepage_info 00:44:26.396 Removing: /dev/shm/bdev_svc_trace.1 00:44:26.396 Removing: /dev/shm/nvmf_trace.0 00:44:26.396 Removing: /dev/shm/spdk_tgt_trace.pid3765270 00:44:26.396 Removing: /var/run/dpdk/spdk0 00:44:26.396 Removing: /var/run/dpdk/spdk1 00:44:26.396 Removing: /var/run/dpdk/spdk2 00:44:26.396 Removing: /var/run/dpdk/spdk3 00:44:26.396 Removing: /var/run/dpdk/spdk4 00:44:26.396 Removing: /var/run/dpdk/spdk_pid20752 00:44:26.396 Removing: /var/run/dpdk/spdk_pid21279 00:44:26.396 Removing: /var/run/dpdk/spdk_pid21807 00:44:26.396 Removing: /var/run/dpdk/spdk_pid24519 00:44:26.396 Removing: /var/run/dpdk/spdk_pid24984 00:44:26.396 Removing: /var/run/dpdk/spdk_pid25509 00:44:26.396 Removing: /var/run/dpdk/spdk_pid2969 00:44:26.396 Removing: /var/run/dpdk/spdk_pid30198 00:44:26.396 Removing: /var/run/dpdk/spdk_pid30463 00:44:26.396 Removing: /var/run/dpdk/spdk_pid32207 00:44:26.396 Removing: /var/run/dpdk/spdk_pid32826 00:44:26.396 Removing: /var/run/dpdk/spdk_pid32999 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3762739 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3763983 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3765270 00:44:26.396 Removing: 
/var/run/dpdk/spdk_pid3765917 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3766992 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3767273 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3768229 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3768402 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3768772 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3770501 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3771958 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3772267 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3772656 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3773110 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3773479 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3773691 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3773892 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3774178 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3775230 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3778415 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3778768 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3779261 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3779286 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3779847 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3779983 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3780664 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3780694 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3781108 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3781263 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3781555 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3781688 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3782205 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3782485 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3782806 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3783116 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3783139 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3783456 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3783742 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3784024 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3784303 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3784558 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3784809 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3785071 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3785334 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3785599 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3785848 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3786124 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3786376 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3786644 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3786916 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3787200 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3787479 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3787774 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3788098 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3788472 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3788753 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3789352 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3789673 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3790130 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3794846 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3848668 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3854198 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3865661 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3872145 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3877404 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3877951 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3892487 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3892491 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3893292 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3894304 00:44:26.396 Removing: 
/var/run/dpdk/spdk_pid3895146 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3895684 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3895822 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3896115 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3896219 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3896229 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3897210 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3898078 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3898901 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3899570 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3899673 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3899946 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3901361 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3902492 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3911972 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3912331 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3917749 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3924672 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3927657 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3940714 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3951710 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3953561 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3954621 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3974866 00:44:26.396 Removing: /var/run/dpdk/spdk_pid3979840 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4011441 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4017137 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4018846 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4021271 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4021469 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4021681 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4021859 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4022669 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4024533 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4025670 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4026240 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4028648 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4029222 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4030061 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4035298 00:44:26.396 Removing: /var/run/dpdk/spdk_pid4047290 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4051440 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4058438 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4059819 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4061580 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4067334 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4072516 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4082081 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4082087 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4087822 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4087962 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4088168 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4088675 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4088699 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4094311 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4094892 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4100412 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4103206 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4109807 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4117024 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4127339 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4135775 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4135779 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4158195 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4158873 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4159546 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4160193 00:44:26.656 Removing: 
/var/run/dpdk/spdk_pid4161258 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4162312 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4163121 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4163924 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4169207 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4169475 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4176551 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4176737 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4179076 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4188009 00:44:26.656 Removing: /var/run/dpdk/spdk_pid4188071 00:44:26.656 Removing: /var/run/dpdk/spdk_pid5089 00:44:26.656 Removing: /var/run/dpdk/spdk_pid6193 00:44:26.656 Removing: /var/run/dpdk/spdk_pid8244 00:44:26.656 Removing: /var/run/dpdk/spdk_pid839 00:44:26.656 Removing: /var/run/dpdk/spdk_pid9511 00:44:26.656 Clean 00:44:26.656 11:52:51 -- common/autotest_common.sh@1450 -- # return 0 00:44:26.656 11:52:51 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:44:26.656 11:52:51 -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:26.656 11:52:51 -- common/autotest_common.sh@10 -- # set +x 00:44:26.915 11:52:51 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:44:26.915 11:52:51 -- common/autotest_common.sh@729 -- # xtrace_disable 00:44:26.915 11:52:51 -- common/autotest_common.sh@10 -- # set +x 00:44:26.915 11:52:51 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:26.915 11:52:51 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:44:26.915 11:52:51 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:44:26.915 11:52:51 -- spdk/autotest.sh@391 -- # hash lcov 00:44:26.915 11:52:51 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:44:26.915 11:52:51 -- spdk/autotest.sh@393 -- # hostname 00:44:26.915 11:52:51 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:44:27.178 geninfo: WARNING: invalid characters removed from testname! 
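With the test state cleaned up, autotest.sh moves on to coverage: the lcov capture above (the command that triggered the geninfo warning) collects the post-run counters, and the commands that follow merge them with the pre-run baseline and strip bundled DPDK, system headers and the example apps out of the combined tracefile. A trimmed sketch of that capture-merge-filter flow, with short illustrative paths standing in for the jenkins workspace and only a subset of the --rc switches; the lcov options themselves are the ones used in the trace.

  out=./coverage
  src=./spdk
  common="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # Capture counters left behind by the test run, then merge with the pre-test baseline.
  lcov $common -c -d "$src" -t "$(hostname)" -o "$out/cov_test.info"
  lcov $common -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # Keep the report focused on SPDK sources: drop bundled DPDK, /usr headers and example apps.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $common -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done

Filtering in place (reading and writing cov_total.info in the same invocation) matches what the log does; lcov loads the whole tracefile before rewriting it, so the pattern is safe.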
00:44:59.364 11:53:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:59.364 11:53:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:00.301 11:53:25 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:02.837 11:53:27 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:06.125 11:53:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:08.030 11:53:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:10.566 11:53:35 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:10.566 11:53:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:10.566 11:53:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:45:10.566 11:53:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:10.566 11:53:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:10.566 11:53:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:10.566 11:53:35 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:10.566 11:53:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:10.566 11:53:35 -- paths/export.sh@5 -- $ export PATH 00:45:10.566 11:53:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:10.566 11:53:35 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:45:10.566 11:53:35 -- common/autobuild_common.sh@437 -- $ date +%s 00:45:10.566 11:53:35 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718013215.XXXXXX 00:45:10.566 11:53:35 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718013215.6mqj3n 00:45:10.566 11:53:35 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:45:10.566 11:53:35 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:45:10.566 11:53:35 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:45:10.566 11:53:35 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:45:10.566 11:53:35 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:45:10.566 11:53:35 -- common/autobuild_common.sh@453 -- $ get_config_params 00:45:10.566 11:53:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:45:10.566 11:53:35 -- common/autotest_common.sh@10 -- $ set +x 00:45:10.566 11:53:35 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:45:10.566 11:53:35 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:45:10.566 11:53:35 -- pm/common@17 -- $ local monitor 00:45:10.566 11:53:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:10.566 11:53:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:10.566 11:53:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:10.566 11:53:35 -- pm/common@21 -- $ date +%s 00:45:10.566 11:53:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:10.566 11:53:35 -- pm/common@21 -- $ date +%s 00:45:10.566 
11:53:35 -- pm/common@25 -- $ sleep 1 00:45:10.566 11:53:35 -- pm/common@21 -- $ date +%s 00:45:10.566 11:53:35 -- pm/common@21 -- $ date +%s 00:45:10.566 11:53:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718013215 00:45:10.566 11:53:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718013215 00:45:10.566 11:53:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718013215 00:45:10.567 11:53:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718013215 00:45:10.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718013215_collect-vmstat.pm.log 00:45:10.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718013215_collect-cpu-temp.pm.log 00:45:10.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718013215_collect-cpu-load.pm.log 00:45:10.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718013215_collect-bmc-pm.bmc.pm.log 00:45:11.764 11:53:36 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:45:11.764 11:53:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:45:11.764 11:53:36 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:11.764 11:53:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:45:11.764 11:53:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:45:11.764 11:53:36 -- spdk/autopackage.sh@19 -- $ timing_finish 00:45:11.764 11:53:36 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:11.764 11:53:36 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:45:11.764 11:53:36 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:11.764 11:53:36 -- spdk/autopackage.sh@20 -- $ exit 0 00:45:11.764 11:53:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:45:11.764 11:53:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:45:11.764 11:53:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:45:11.764 11:53:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:11.764 11:53:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:45:11.764 11:53:36 -- pm/common@44 -- $ pid=48330 00:45:11.764 11:53:36 -- pm/common@50 -- $ kill -TERM 48330 00:45:11.764 11:53:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:11.764 11:53:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:45:11.764 11:53:36 -- pm/common@44 -- $ pid=48332 00:45:11.764 11:53:36 -- pm/common@50 -- $ kill -TERM 
48332 00:45:11.764 11:53:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:11.764 11:53:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:45:11.764 11:53:36 -- pm/common@44 -- $ pid=48333 00:45:11.764 11:53:36 -- pm/common@50 -- $ kill -TERM 48333 00:45:11.764 11:53:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:45:11.764 11:53:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:45:11.764 11:53:36 -- pm/common@44 -- $ pid=48356 00:45:11.765 11:53:36 -- pm/common@50 -- $ sudo -E kill -TERM 48356 00:45:11.765 + [[ -n 3642145 ]] 00:45:11.765 + sudo kill 3642145 00:45:11.774 [Pipeline] } 00:45:11.795 [Pipeline] // stage 00:45:11.801 [Pipeline] } 00:45:11.822 [Pipeline] // timeout 00:45:11.829 [Pipeline] } 00:45:11.847 [Pipeline] // catchError 00:45:11.854 [Pipeline] } 00:45:11.872 [Pipeline] // wrap 00:45:11.879 [Pipeline] } 00:45:11.895 [Pipeline] // catchError 00:45:11.906 [Pipeline] stage 00:45:11.908 [Pipeline] { (Epilogue) 00:45:11.924 [Pipeline] catchError 00:45:11.926 [Pipeline] { 00:45:11.941 [Pipeline] echo 00:45:11.943 Cleanup processes 00:45:11.950 [Pipeline] sh 00:45:12.235 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:12.235 48435 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:45:12.235 48776 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:12.251 [Pipeline] sh 00:45:12.538 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:12.538 ++ grep -v 'sudo pgrep' 00:45:12.538 ++ awk '{print $1}' 00:45:12.538 + sudo kill -9 48435 00:45:12.552 [Pipeline] sh 00:45:12.836 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:12.836 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:45:20.957 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:45:26.244 [Pipeline] sh 00:45:26.591 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:26.591 Artifacts sizes are good 00:45:26.606 [Pipeline] archiveArtifacts 00:45:26.614 Archiving artifacts 00:45:26.793 [Pipeline] sh 00:45:27.078 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:27.092 [Pipeline] cleanWs 00:45:27.118 [WS-CLEANUP] Deleting project workspace... 00:45:27.118 [WS-CLEANUP] Deferred wipeout is used... 00:45:27.125 [WS-CLEANUP] done 00:45:27.127 [Pipeline] } 00:45:27.146 [Pipeline] // catchError 00:45:27.157 [Pipeline] sh 00:45:27.437 + logger -p user.info -t JENKINS-CI 00:45:27.447 [Pipeline] } 00:45:27.465 [Pipeline] // stage 00:45:27.471 [Pipeline] } 00:45:27.489 [Pipeline] // node 00:45:27.495 [Pipeline] End of Pipeline 00:45:27.528 Finished: SUCCESS